A fast and accurate face detector for indexation of face images
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840615
Raphaël Féraud, O. Bernier, J. Viallet, M. Collobert
Detecting faces in images with complex backgrounds is a difficult task. Our approach, which obtains state-of-the-art results, is based on a generative neural network model: the constrained generative model (CGM). To detect side-view faces and to decrease the number of false alarms, a conditional mixture of networks is used. To reduce the computational cost, a fast search algorithm is proposed. The level of performance reached, in terms of detection accuracy and processing time, allows us to apply this detector to a real-world application: the indexation of face images on the WWW.
{"title":"A fast and accurate face detector for indexation of face images","authors":"Raphaël Féraud, O. Bernier, J. Viallet, M. Collobert","doi":"10.1109/AFGR.2000.840615","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840615","url":null,"abstract":"Detecting faces in images with complex backgrounds is a difficult task. Our approach, which obtains state-of-the-art results, is based on a generative neural network model: the constrained generative model (CGM). To detect side-view faces and to decrease the number of false alarms, a conditional mixture of networks is used. To decrease the computational time cost, a fast search algorithm is proposed. The level of performance reached, in terms of detection accuracy and processing time, allows us to apply this detector to a real-world application: the indexation of face images on the WWW.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123849727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of confidence measures for face recognition
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840644
S. Eickeler, Mirco Jabs, G. Rigoll
This paper compares different confidence measures for the results of statistical face recognition systems. The main applications of a confidence measure are the rejection of unknown people and the detection of recognition errors. Some of the confidence measures are based on the posterior probability and some on the ranking of the recognition results. The posterior probability is calculated by applying Bayes' rule, with different ways of approximating the unconditional likelihood. The confidence measure based on the ranking is a new method. Experiments to evaluate the confidence measures are carried out on a pseudo-2D hidden Markov model-based face recognition system and the Bochum face database.
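The abstract does not spell out which approximations of the unconditional likelihood are compared; the sketch below (function names and the uniform-prior assumption are ours) illustrates one plausible posterior-style measure, approximating p(x) by the sum over all enrolled classes, alongside a simple ranking-style score based on the gap between the two best hypotheses.

```python
import numpy as np

def posterior_confidence(log_likelihoods):
    """Posterior-style confidence for the best class.

    log_likelihoods: one log p(x | class_i) per enrolled person, as produced
    by the recognizer. Assumes a uniform prior; the unconditional likelihood
    p(x) is approximated by the sum over all enrolled classes (one of several
    possible approximations).
    """
    log_l = np.array(log_likelihoods, dtype=float)
    log_l -= log_l.max()                      # numerical stability
    post = np.exp(log_l) / np.exp(log_l).sum()
    best = int(np.argmax(post))
    return best, post[best]

def rank_gap_confidence(log_likelihoods):
    """Ranking-style confidence: gap between the best and second-best score."""
    s = np.sort(np.array(log_likelihoods, dtype=float))[::-1]
    return s[0] - s[1]

# Usage: reject the identity claim if the confidence falls below a threshold.
scores = [-310.2, -315.7, -340.1]
best, conf = posterior_confidence(scores)
accept = conf > 0.9 and rank_gap_confidence(scores) > 3.0
```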
{"title":"Comparison of confidence measures for face recognition","authors":"S. Eickeler, Mirco Jabs, G. Rigoll","doi":"10.1109/AFGR.2000.840644","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840644","url":null,"abstract":"This paper compares different confidence measures for the results of statistical face recognition systems. The main applications of a confidence measure are rejection of unknown people and the detection of recognition errors. Some of the confidence measures are based on the posterior probability and some on the ranking of the recognition results. The posterior probability is calculated by applying Bayes' rule with different ways to approximate the unconditional likelihood. The confidence measure based on the ranking is a new method. Experiments to evaluate the confidence measures are carried out on a pseudo 2D hidden Markov model-based face recognition system and the Bochum face database.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129739061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time detection of nodding and head-shaking by directly detecting and tracking the "between-eyes"
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840610
S. Kawato, J. Ohya
Among head gestures, nodding and head-shaking are very common and often used. Thus the detection of such gestures is basic to a visual understanding of human responses. However, it is difficult to detect them in real time, because nodding and head-shaking are fairly small and fast head movements. We propose an approach for detecting nodding and head-shaking in real time from a single color video stream by directly detecting and tracking a point between the eyes, or what we call the "between-eyes". Along a circle of a certain radius centered at the "between-eyes", the pixel values go through two cycles of bright parts (forehead and nose bridge) and dark parts (eyes and brows). The output of the proposed circle-frequency filter has a local maximum at these characteristic points. To distinguish the true "between-eyes" from similar characteristic points in other face parts, we confirm the candidate with eye detection. Once the "between-eyes" is detected, a small area around it is copied as a template and the system enters the tracking mode. Combining the circle-frequency filtering with the template, tracking is done not by searching around the previous position but by selecting candidates using the template; the template is then updated. Thanks to this tracking algorithm, the system can track the "between-eyes" stably and accurately. It runs at 13 frames/s without special hardware. By analyzing the movement of the point, we can detect nodding and head-shaking. Some experimental results are shown.
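A minimal sketch of one plausible reading of the circle-frequency filter, assuming it is implemented as the energy of the two-cycles-per-revolution Fourier component of the intensities sampled on the circle (the sampling density and the normalization are our choices, not the paper's):

```python
import numpy as np

def circle_frequency_score(gray, cx, cy, radius, n_samples=36):
    """Score how strongly the intensities on a circle around (cx, cy)
    show exactly two bright/dark cycles (forehead/nose vs. eyes/brows).

    gray: 2D float array (grayscale image). Returns the magnitude of the
    frequency-2 Fourier coefficient of the sampled circle; local maxima of
    this score are candidate "between-eyes" points.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(cx + radius * np.cos(angles)).astype(int), 0, gray.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(angles)).astype(int), 0, gray.shape[0] - 1)
    values = gray[ys, xs]
    spectrum = np.fft.rfft(values - values.mean())
    return np.abs(spectrum[2]) / n_samples    # energy at two cycles per revolution
```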
{"title":"Real-time detection of nodding and head-shaking by directly detecting and tracking the \"between-eyes\"","authors":"S. Kawato, J. Ohya","doi":"10.1109/AFGR.2000.840610","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840610","url":null,"abstract":"Among head gestures, nodding and head-shaking are very common and used often. Thus the detection of such gestures is basic to a visual understanding of human responses. However it is difficult to detect them in real-time, because nodding and head-shaking are fairly small and fast head movements. We propose an approach for detecting nodding and head-shaking in real time from a single color video stream by directly detecting and tracking a point between the eyes, or what we call the \"between-eyes\". Along a circle of a certain radius centered at the \"between-eyes\", the pixel value has two cycles of bright parts (forehead and nose bridge) and dark parts (eyes and brows). The output of the proposed circle-frequency filter has a local maximum at these characteristic points. To distinguish the true \"between-eyes\" from similar characteristic points in other face parts, we do a confirmation with eye detection. Once the \"between-eyes\" is detected, a small area around it is copied as a template and the system enters the tracking mode. Combining with the circle-frequency filtering and the template, the tracking is done not by searching around but by selecting candidates using the template; the template is then updated. Due to this special tracking algorithm, the system can track the \"between-eyes\" stably and accurately. It runs at 13 frames/s rate without special hardware. By analyzing the movement of the point, we can detect nodding and head-shaking. Some experimental results are shown.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114357078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding purposeful human motion
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840662
C. Wren, B. Clarkson, A. Pentland
Human motion can be understood on many levels. The most basic level is the notion that humans are collections of things that have predictable visual appearance. Next is the notion that humans exist in a physical universe; as a consequence, a large part of human motion can be modeled and predicted with the laws of physics. Finally, there is the notion that humans use muscles to actively shape purposeful motion. We employ a recursive framework for real-time 3D tracking of human motion that enables pixel-level, probabilistic processes to take advantage of the contextual knowledge encoded in the higher-level models, including models of dynamic constraints on human motion. We show that models of purposeful action arise naturally from this framework and, further, that those models can be used to improve the perception of human motion. Results are shown that demonstrate automatic discovery of features in this new feature space.
{"title":"Understanding purposeful human motion","authors":"C. Wren, B. Clarkson, A. Pentland","doi":"10.1109/AFGR.2000.840662","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840662","url":null,"abstract":"Human motion can be understood on many levels. The most basic level is the notion that humans are collections of things that have predictable visual appearance. Next is the notion that humans exist in a physical universe, as a consequence of this, a large part of human motion can be modeled and predicted with the laws of physics. Finally there is the notion that humans utilize muscles to actively shape purposeful motion. We employ a recursive framework for real-time, 3D tracking of human motion that enables pixel-level, probabilistic processes to take advantage of the contextual knowledge encoded in the higher-level models, including models of dynamic constraints on human motion. We show that models of purposeful action arise naturally from this framework, and further, that those models can be used to improve the perception of human motion. Results are shown that demonstrate automatic discovery of features in this new feature space.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133011684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A framework for modeling the appearance of 3D articulated figures
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840661
H. Kjellström, F. D. L. Torre, Michael J. Black
This paper describes a framework for constructing a linear subspace model of image appearance for complex articulated 3D figures such as humans and other animals. A commercial motion capture system provides 3D data that is aligned with images of subjects performing various activities. Portions of a limb's image appearance are seen from multiple views and for multiple subjects. From these partial views, weighted principal component analysis is used to construct a linear subspace representation of the "unwrapped" image appearance of each limb. The linear subspaces provide a generative model of the object appearance that is exploited in a Bayesian particle filtering tracking system. Results of tracking single limbs and walking humans are presented.
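The exact weighting scheme is not given in the abstract; the sketch below is a generic per-view weighted PCA (the actual method additionally has to cope with partially observed, multi-view limb appearance), intended only to show how a weighted subspace basis can be computed.

```python
import numpy as np

def weighted_pca(X, w, k):
    """Weighted PCA with one non-negative weight per training sample.

    X: (n_samples, n_pixels) matrix of vectorized, "unwrapped" limb views.
    w: (n_samples,) weights, e.g. down-weighting poorly aligned views.
    k: number of basis vectors to keep.
    Returns the weighted mean image and the top-k principal directions.
    """
    w = np.asarray(w, dtype=float) / np.sum(w)
    mean = w @ X                                   # weighted mean image
    Xc = (X - mean) * np.sqrt(w)[:, None]          # weight-scaled, centered data
    # SVD of the scaled data gives the eigenvectors of the weighted covariance.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, vt[:k]

# A new view is encoded by its subspace coefficients:
#   coeffs = (view - mean) @ basis.T
```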
{"title":"A framework for modeling the appearance of 3D articulated figures","authors":"H. Kjellström, F. D. L. Torre, Michael J. Black","doi":"10.1109/AFGR.2000.840661","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840661","url":null,"abstract":"This paper describes a framework for constructing a linear subspace model of image appearance for complex articulated 3D figures such as humans and other animals. A commercial motion capture system provides 3D data that is aligned with images of subjects performing various activities. Portions of a limb's image appearance are seen from multiple views and for multiple subjects. From these partial views, weighted principal component analysis is used to construct a linear subspace representation of the \"unwrapped\" image appearance of each limb. The linear subspaces provide a generative model of the object appearance that is exploited in a Bayesian particle filtering tracking system. Results of tracking single limbs and walking humans are presented.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127856275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hallucinating faces
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840616
Simon Baker, T. Kanade
Faces often appear very small in surveillance imagery because of the wide fields of view that are typically used and the relatively large distance between the cameras and the scene. For tasks such as face recognition, resolution enhancement techniques are therefore generally needed. Although numerous resolution enhancement algorithms have been proposed in the literature, most of them are limited by the fact that they make weak, if any, assumptions about the scene. We propose an algorithm to learn a prior on the spatial distribution of the image gradient for frontal images of faces. We proceed to show how such a prior can be incorporated into a resolution enhancement algorithm to yield 4- to 8-fold improvements in resolution (i.e., 16 to 64 times as many pixels). The additional pixels are, in effect, hallucinated.
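The abstract does not state the estimation framework explicitly; as a hedged sketch (our notation, not the paper's objective), a learned gradient prior typically enters resolution enhancement as a MAP estimate of the high-resolution face H given the low-resolution observation L and a blur-plus-downsampling operator A:

```latex
% MAP super-resolution with a learned, face-specific gradient prior.
% The quadratic data term and the weight \lambda are our assumptions.
\hat{H} = \arg\max_{H}\; p(L \mid H)\, p(\nabla H)
        = \arg\min_{H}\; \|A H - L\|^{2} + \lambda\, E_{\mathrm{prior}}(\nabla H)
```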
{"title":"Hallucinating faces","authors":"Simon Baker, T. Kanade","doi":"10.1109/AFGR.2000.840616","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840616","url":null,"abstract":"Faces often appear very small in surveillance imagery because of the wide fields of view that are typically used and the relatively large distance between the cameras and the scene. For tasks such as face recognition, resolution enhancement techniques are therefore generally needed. Although numerous resolution enhancement algorithms have been proposed in the literature, most of them are limited by the fact that they make weak, if any, assumptions about the scene. We propose an algorithm to learn a prior on the spatial distribution of the image gradient for frontal images of faces. We proceed to show how such a prior can be incorporated into a resolution enhancement algorithm to yield 4- to 8-fold improvements in resolution (i.e., 16 to 64 times as many pixels). The additional pixels are, in effect, hallucinated.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134504098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Person tracking in real-world scenarios using statistical methods
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840657
G. Rigoll, S. Eickeler, Stefan Müller
This paper presents a novel approach to robust and flexible person tracking using an algorithm that combines two powerful stochastic modeling techniques: pseudo-2D hidden Markov models (P2DHMMs), used for capturing the shape of a person within an image frame, and the well-known Kalman-filtering algorithm, which uses the output of the P2DHMM to track the person by estimating a bounding-box trajectory indicating the person's location throughout the video sequence. The two algorithms cooperate closely, and with this cooperative feedback the proposed approach makes person tracking possible even in the presence of background motion caused by moving objects or by camera operations such as panning or zooming. Our results are confirmed by several tracking examples in real scenarios, shown at the end of the paper and provided on our institute's Web server.
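As an illustration of the Kalman-filtering half of the combination (the P2DHMM segmentation itself is beyond a short sketch), the following assumes a constant-velocity model for the bounding-box centre; the state layout and the noise levels are our assumptions, not values from the paper.

```python
import numpy as np

# Constant-velocity Kalman filter over the bounding-box centre (cx, cy).
# The "measurement" z would come from the P2DHMM person/background
# segmentation of each frame; here it is just a 2D point.
dt = 1.0
F = np.array([[1, 0, dt, 0],      # state: [cx, cy, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)              # process noise (assumed)
R = 4.0 * np.eye(2)               # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured box centre for this frame."""
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured centre.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```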
{"title":"Person tracking in real-world scenarios using statistical methods","authors":"G. Rigoll, S. Eickeler, Stefan Müller","doi":"10.1109/AFGR.2000.840657","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840657","url":null,"abstract":"This paper presents a novel approach to robust and flexible person tracking using an algorithm that combines two powerful stochastic modeling techniques: pseudo-2D hidden Markov models (P2DHMM) used for capturing the shape of a person within an image frame, and the well-known Kalman-filtering algorithm, that uses the output of the P2DHMM for tracking the person by estimation of a bounding box trajectory indicating the location of the person within the entire video sequence. Both algorithms cooperate together in an optimal way, and with this co-operative feedback, the proposed approach even makes the tracking of people possible in the presence of background motions caused by moving objects or by camera operations as, e.g., panning or zooming. Our results are confirmed by several tracking examples in real scenarios, shown at the end of the paper and provided on the Web server of our institute.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115393313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comprehensive database for facial expression analysis
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840611
T. Kanade, Ying-li Tian, J. Cohn
Within the past decade, significant effort has gone into developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes the level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences between subjects, head orientation and scene complexity, image characteristics, and the relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive testbed to date for comparative studies of facial expression analysis.
{"title":"Comprehensive database for facial expression analysis","authors":"T. Kanade, Ying-li Tian, J. Cohn","doi":"10.1109/AFGR.2000.840611","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840611","url":null,"abstract":"Within the past decade, significant effort has occurred in developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity image characteristics, and relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive testbed to date for comparative studies of facial expression analysis.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"355 14-15","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120931368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An incremental learning method for face recognition under continuous video stream
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840643
J. Weng, C. Evans, Wey-Shiuan Hwang
Current computer vision technology requires humans to collect images, store images, segment images for computers, and train computer recognition systems using these images. It is unlikely that such a labor-intensive manual process can meet the demands of many challenging recognition tasks. Our goal is to enable machines to learn directly from sensory input streams while interacting with the environment, including human teachers. We propose a new technique which incrementally derives discriminating features in the input space. Virtual labels are formed by clustering in the output space and are used to extract discriminating features in the input space. We organize the resulting discriminating subspace in a coarse-to-fine fashion and store the information in a decision tree. Such an incremental hierarchical discriminating regression (IHDR) decision tree can be modeled by a hierarchical probability distribution model. We demonstrate the performance of the algorithm on the problem of face recognition using video sequences totaling 33,889 frames from 143 different subjects. A correct recognition rate of 95.1% has been achieved.
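A drastically simplified, non-incremental reading of the virtual-label idea is sketched below: output vectors are clustered to obtain virtual labels, and LDA-style directions separating those labels are then extracted in the input space (the actual IHDR builds this hierarchy incrementally, node by node, which the sketch omits).

```python
import numpy as np

def virtual_labels(Y, k, iters=20, seed=0):
    """Cluster output-space vectors Y (n, d_out) into k 'virtual labels'."""
    rng = np.random.default_rng(seed)
    centers = Y[rng.choice(len(Y), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((Y[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Y[labels == j].mean(axis=0)
    return labels

def discriminating_directions(X, labels, k_dims):
    """LDA-style input-space directions that separate the virtual labels."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for j in np.unique(labels):
        Xj = X[labels == j]
        mj = Xj.mean(axis=0)
        Sw += (Xj - mj).T @ (Xj - mj)
        Sb += len(Xj) * np.outer(mj - mean, mj - mean)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)[:k_dims]
    return evecs.real[:, order]
```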
{"title":"An incremental learning method for face recognition under continuous video stream","authors":"J. Weng, C. Evans, Wey-Shiuan Hwang","doi":"10.1109/AFGR.2000.840643","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840643","url":null,"abstract":"The current technology in computer vision requires humans to collect images, store images, segment images for computers and train computer recognition systems using these images. It is unlikely that such a manual labor process can meet the demands of many challenging recognition tasks. Our goal is to enable machines to learn directly from sensory input streams while interacting with the environment including human teachers. We propose a new technique which incrementally derives discriminating features in the input space. Virtual labels are formed by clustering in the output space to extract discriminating features in the input space. We organize the resulting discriminating subspace in a coarse-to-fine fashion and store the information in a decision tree. Such an incremental hierarchical discriminating regression (IHDR) decision tree can be modeled by a hierarchical probability distribution model. We demonstrate the performance of the algorithm on the problem of face recognition using video sequences of 33889 frames in length from 143 different subjects. A correct recognition rate of 95.1% has been achieved.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116626421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A virtual 3D blackboard: 3D finger tracking using a single camera
Pub Date: 2000-03-26 | DOI: 10.1109/AFGR.2000.840686
Andrew Wu, M. Shah, N. Lobo
We present a method for tracking the 3D position of a finger using a single camera placed several meters away from the user. After skin detection, we use motion to identify the gesticulating arm. The finger point is found by analyzing the arm's outline. To derive a 3D trajectory, we first track the 2D positions of the user's elbow and shoulder. Because the lengths of a person's upper arm and lower arm are fixed, the possible locations of the elbow and the finger lie on two spheres of constant radius. From the previously tracked body points, we can reconstruct these spheres and compute the 3D positions of the elbow and finger. These steps are fully automated and do not require human intervention. The system presented can be used as a visualization tool, or as a user input interface, in cases where the user would rather not be constrained by the camera system.
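A minimal sketch of the sphere-based depth recovery, assuming a calibrated camera and a known 3D position of the parent joint (shoulder for the elbow, elbow for the finger); the function name and the camera-at-origin convention are ours, not the paper's.

```python
import numpy as np

def point_on_sphere_along_ray(ray_dir, center, radius):
    """Intersect a camera ray (through the origin, direction ray_dir)
    with a sphere of the given center and radius; return the nearer 3D point.

    ray_dir: 3-vector from the camera through the image point, e.g.
             [(u - cx) / f, (v - cy) / f, 1] for a calibrated camera.
    center:  known 3D position of the parent joint.
    radius:  the fixed limb length (upper arm or lower arm).
    """
    d = np.array(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    c = np.array(center, dtype=float)
    # Solve ||t*d - c||^2 = r^2 for the depth t along the ray.
    b = d @ c
    disc = b * b - (c @ c - radius ** 2)
    if disc < 0:
        return None                        # ray misses the sphere
    t = b - np.sqrt(disc)                  # nearer of the two intersections
    if t < 0:
        t = b + np.sqrt(disc)
    return t * d

# Usage (hypothetical variable names):
#   elbow  = point_on_sphere_along_ray(ray_to_elbow_pixel,  shoulder_3d, upper_arm_len)
#   finger = point_on_sphere_along_ray(ray_to_finger_pixel, elbow,       lower_arm_len)
```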
{"title":"A virtual 3D blackboard: 3D finger tracking using a single camera","authors":"Andrew Wu, M. Shah, N. Lobo","doi":"10.1109/AFGR.2000.840686","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840686","url":null,"abstract":"We present a method for tracking the 3D position of a finger, using a single camera placed several meters away from the user. After skin detection, we use motion to identify the gesticulating arm. The finger point is found by analyzing the arm's outline. To derive a 3D trajectory, we first track 2D positions of the user's elbow and shoulder. Given that a human's upper arm and lower arm have consistent length, we observe that the possible locations of a finger and elbow form two spheres with constant radii. From the previously tracked body points, we can reconstruct these spheres, computing the 3D position of the elbow and finger. These steps are fully automated and do not require human intervention. The system presented can be used as a visualization tool, or as a user input interface, in cases when the user would rather not be constrained by the camera system.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127555462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}