Conformal Rectification of Omnidirectional Stereo Pairs
Christopher Geyer, Kostas Daniilidis
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10082
A pair of stereo images is said to be rectified if corresponding image points have the same y-coordinate in their respective images. In this paper we consider the rectification of two omnidirectional cameras, specifically two parabolic catadioptric cameras. Such systems consist of a parabolic mirror and an orthographically projecting lens. We show that if the image coordinates are represented as a point z in the complex plane, then the rectification is specified by coth⁻¹ z. This rectification is shown to be conformal, in that it is locally distortionless, and furthermore, it is unique up to scale and transformation. We present an experiment in which two real images are rectified and stereo matching is performed.
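Since coth⁻¹ z = ½ log((z + 1)/(z − 1)), the rectification can be evaluated directly on complex image coordinates. Below is a minimal numpy sketch, assuming coordinates have already been normalized so the two epipole images sit at ±1 (the function and parameter names are ours, not the paper's):

```python
import numpy as np

def rectify_parabolic(z, scale=1.0):
    """Conformal rectification w = coth^{-1}(z) of parabolic
    catadioptric image points given as complex numbers.

    Assumes coordinates are normalized so the epipole images lie at
    +1 and -1; curves of constant Im(w) are then the epipolar circles
    through those points, i.e. rectified rows share a y-coordinate.
    """
    z = np.asarray(z, dtype=complex)
    return scale * 0.5 * np.log((z + 1.0) / (z - 1.0))

# A few image points as complex coordinates.
pts = np.array([0.3 + 0.4j, 1.5 - 0.2j, -0.7 + 1.1j])
print(rectify_parabolic(pts))
```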
{"title":"Conformal Rectification of Omnidirectional Stereo Pairs","authors":"Christopher Geyer, Kostas Daniilidis","doi":"10.1109/CVPRW.2003.10082","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10082","url":null,"abstract":"A pair of stereo images are said to be rectified if corresponding image points have the same y-coordinate in their respective images. In this paper we consider the rectification of two omnidirectional cameras, specifically two parabolic catadioptric cameras. Such systems consist of a parabolic mirror and an orthographically projecting lens. We show that if the image coordinates are represented as a point z in the complex plane, then the rectification is specified by coth -1z. This rectification is shown to be conformal, in that it is locally distortionless, and furthermore, it is unique up to scale and transformation. We show an experiment in which two real images have been rectified and a stereo matching performed.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"49 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120889035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Handwritten Amharic Bank Check Recognition Using Hidden Markov Random Field
W. Alemu, S. Fuchs
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10027
Amharic, a working language in Ethiopia, has its own writing system, which is entirely different from that of the Latin-alphabet-based languages. Amharic handwriting recognition is challenging due to the large number of symbols and to significant inter-class similarity and intra-class variability. This paper presents the application of Hidden Markov Random Fields (HMRFs) to recognition of the handwritten legal-amount field of Amharic bank checks. The paper makes three main contributions. First, a new feature extraction technique extracts natural features as perceived by humans; the features extracted by this technique yield a significant performance improvement. Second, a classification technique is developed that estimates the likelihood using a method known as pseudo-marginal probability. The third contribution is the application of contextual information based on the syntactic structure of Amharic checks. Such contextual information is important in the recognition process because even humans fail to recognize symbols correctly without any context. A noticeable difference is observed between results obtained with and without contextual information. On the whole, despite the considerable inter-class similarity and intra-class variability of handwritten Amharic characters, attractive results are obtained.
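The paper's pseudo-marginal HMRF likelihood is not reproduced here, but the role of the third contribution, contextual information, can be illustrated with a toy rescoring step: per-symbol classifier likelihoods are combined, and hypotheses consistent with a (here hypothetical) legal-amount lexicon are favored. All symbols, scores, and names below are illustrative.

```python
from itertools import product

# Hypothetical per-position candidates: (symbol, likelihood) pairs.
candidates = [
    [("ሀ", 0.55), ("ሁ", 0.45)],
    [("ለ", 0.90), ("ላ", 0.10)],
    [("ት", 0.80), ("ተ", 0.20)],
]
lexicon = {"ሁለት"}  # toy legal-amount vocabulary ("two")

def best_word(candidates, lexicon, bonus=2.0):
    """Pick the word hypothesis maximizing likelihood times a context
    bonus for lexicon membership; here context flips the decision away
    from the raw per-symbol maximum."""
    best, best_score = None, 0.0
    for combo in product(*candidates):
        word = "".join(s for s, _ in combo)
        score = 1.0
        for _, p in combo:
            score *= p
        if word in lexicon:
            score *= bonus
        if score > best_score:
            best, best_score = word, score
    return best, best_score

print(best_word(candidates, lexicon))  # ('ሁለት', 0.648), not 'ሀለት'
```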
{"title":"Handwritten Amharic Bank Check Recognition Using Hidden Markov Random Field","authors":"W. Alemu, S. Fuchs","doi":"10.1109/CVPRW.2003.10027","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10027","url":null,"abstract":"Amharic, a working language in Ethiopia, has its own writing system which is totally different from that of the Latin alphabet based languages. Amharic handwriting recognition is challenging due to the huge number of symbols, significant interclass similarity and also intra-class variability. In this paper the application of Hidden Markov Random Field (HMRF) for handwriting recognition of the legal amount field of Amharic bank check is presented. The three main contributions of this paper are the following. First, a new feature extraction technique is used which tries to extract natural features as perceived by human beings. The features extracted by this technique show a significant performance improvement. Second, a classification technique by estimating likelihood using a method known as pseudo-marginal probability is developed. The third contribution is the application of contextual information based on the syntactical structure of Amharic checks. Such context information is important in recognition process because even humans fail to recognize symbols correctly without any context. A noticeable difference is observed between results obtained with and without the application of contextual information. On the whole, despite the huge interclass similarity and also intra-class variability of handwritten Amharic characters, attractive results are found.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132325846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reckless motion estimation from omnidirectional image and inertial measurements
Dennis W. Strelow, Sanjiv Singh
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10073
Two approaches to improving the accuracy of camera motion estimation from image sequences are the use of omnidirectional cameras, which combine a conventional camera with a convex mirror that widens the field of view, and the use of both image and inertial measurements, which are highly complementary. In this paper, we describe optimal batch algorithms for estimating motion and scene structure from either conventional or omnidirectional images, with or without inertial data. We also present a method for motion estimation from inertial data and the tangential components of image projections. Tangential components are identical across a wide range of conventional and omnidirectional projection models, so the resulting method does not require an accurate projection model. Because this method discards half of the projection data (i.e., the radial components) and can operate with a projection model that may grossly mismodel the actual camera behavior, we call it "reckless" motion estimation; we show, however, that the camera positions and scene structure estimated with it can be quite accurate.
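A sketch of the model-independent measurement: under any radially symmetric projection, a point's image lies along the ray from the image center whose direction is set by the camera-frame X and Y alone, so only that angle enters the cost. The known image center, the axis conventions, and all names below are our assumptions.

```python
import numpy as np

def tangential_residual(obs_px, point_cam, center_px):
    """'Reckless' reprojection residual using only the tangential
    (azimuthal) component of the projection.

    obs_px:    observed feature location in pixels
    point_cam: 3D point in camera coordinates (X right, Y down, Z forward)
    center_px: image center in pixels

    Under any radially symmetric projection model the point images
    along the direction atan2(Y, X) from the center; the radial
    distance, the discarded half of the data, is what depends on the
    particular lens or mirror model.
    """
    obs = np.arctan2(obs_px[1] - center_px[1], obs_px[0] - center_px[0])
    pred = np.arctan2(point_cam[1], point_cam[0])
    r = obs - pred
    return np.arctan2(np.sin(r), np.cos(r))  # wrap to (-pi, pi]
```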
{"title":"Reckless motion estimation from omnidirectional image and inertial measurements","authors":"Dennis W. Strelow, Sanjiv Singh","doi":"10.1109/CVPRW.2003.10073","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10073","url":null,"abstract":"Two approaches to improving the accuracy of camera motion estimation from image sequences are the use of omnidirectional cameras, which combine a conventional camera with a convex mirror that magnifies the field of view, and the use of both image and inertial measurements, which are highly complementary. In this paper, we describe optimal batch algorithms for estimating motion and scene structure from either conventional or omnidirectional images, with or without inertial data. We also present a method for motion estimation from inertial data and the tangential components of image projections. Tangential components are identical across a wide range of conventional and omnidirectional projection models, so the resulting method does not require any accurate projection model. Because this method discards half of the projection data (i.e., the radial components) and can operate with a projection model that may grossly mismodel the actual camera behavior, we call the method \"reckless\" motion estimation, but we show that the camera positions and scene structure estimated using this method can be quite accurate.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114408773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Carving Prior Manifolds Using Inequalities
M. Eriksson, S. Carlsson
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10064
Prior information learned from training data is used increasingly in image analysis and computer vision. However, the high dimensionality of the parameter spaces and the complexity of the probability distributions often make the exact learning of priors an impossible problem, requiring an excessive amount of training data that is seldom available in practice. In this paper we propose a weaker form of prior estimation that tries to learn the boundaries of impossible events from examples. This is equivalent to estimating the support of the prior distribution, or the manifold of possible events. The idea is to model the set of possible events by algebraic inequalities. Learning proceeds by selecting those inequalities that show a consistent sign when applied to the training data set. Every such inequality "carves" out a region of impossible events in the parameter space. The manifold of possible events estimated in this way will in general represent the qualitative properties of the events. We give examples of this in the restoration of handwritten characters and of automatically tracked body locations.
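As a concrete, simplified instance, the sketch below tests randomly drawn affine functionals for a consistent sign on the training set; the paper works with algebraic inequalities more generally, and the names here are ours.

```python
import numpy as np

def consistent_inequalities(X, n_candidates=1000, seed=0):
    """Keep random affine functionals f(x) = a.x + b whose sign is
    consistent over all training examples; each kept inequality carves
    a half-space of 'impossible' events out of the parameter space.
    Most random candidates are rejected.

    X: (n_samples, dim) training data. Returns a list of (a, b, sign).
    """
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_candidates):
        a = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        vals = X @ a + b
        if np.all(vals > 0):
            kept.append((a, b, +1.0))
        elif np.all(vals < 0):
            kept.append((a, b, -1.0))
    return kept

def is_possible(x, kept):
    """An event is 'possible' if it violates none of the carved bounds."""
    return all(s * (x @ a + b) > 0 for a, b, s in kept)
```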
{"title":"Carving Prior Manifolds Using Inequalities","authors":"M. Eriksson, S. Carlsson","doi":"10.1109/CVPRW.2003.10064","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10064","url":null,"abstract":"The use of prior information by learning from training data is used increasingly in image analysis and computer vision. The high dimensionality of the parameter spaces and the complexity of the probability distributions however often makes the exact learning of priors an impossible problem, requiring an excessive amount of training data that is seldom realizable in practise. In this paper we propose a weaker form of prior estimation which tries to learn the boundaries of impossible events from examples. This is equivalent to estimating the support of the prior distribution or the manifold of possible events. The idea is to model the set of possible events by algebraic inequalities. Learning proceeds by selecting those inequalities that show a consistent sign when applied to the training data set. Every such inequality \"carves\" out a region of impossible events in the parameter space. The manifold of possible events estimated in this way will in general represent the qualitative properties of the events. We give example of this in the problems of restoration of handwritten characters and automatically tracked body locations","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114615434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deformable Model Based Shape Analysis Stone Tool Application
Kyoungju Park, A. Nowell, Dimitris N. Metaxas
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10010
This paper introduces a method to measure the average shape of handaxes and to characterize deviations from this average shape by taking into account both internal and external information. In Paleolithic archaeology, standardization and symmetry are two important concepts. For axially symmetrical shapes such as handaxes, a simple and appropriate shape representation can be introduced. We adapt a parameterized deformable-model approach to allow flexible shape coverage and to analyze similarity with a few compact parameters. Moreover, a hierarchical fitting method ensures stability while measuring global and local shape features step by step. Our model incorporates a physics-based framework so that it deforms under forces exerted by the boundary data sets.
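The paper's hierarchical, parameterized fit is not reproduced here, but its physics-based ingredient, deformation under forces from boundary data, can be sketched as a simple iterative scheme (all names and parameters are illustrative):

```python
import numpy as np

def fit_contour(model_pts, data_pts, iters=200, alpha=0.1, beta=0.3):
    """Toy physics-based fit of a closed model contour to boundary
    data: each model point feels an external force toward its nearest
    data point and an internal smoothing force toward the midpoint of
    its neighbors.

    model_pts: (n, 2) initial model boundary; data_pts: (m, 2) data.
    """
    p = np.array(model_pts, float)
    d = np.asarray(data_pts, float)
    for _ in range(iters):
        # External force: pull toward the nearest boundary-data point.
        nearest = d[((p[:, None, :] - d[None, :, :]) ** 2).sum(-1).argmin(1)]
        external = nearest - p
        # Internal force: discrete Laplacian smoothing (closed contour).
        internal = 0.5 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)) - p
        p += alpha * external + beta * internal
    return p
```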
{"title":"Deformable Model Based Shape Analysis Stone Tool Application","authors":"Kyoungju Park, A. Nowell, Dimitris N. Metaxas","doi":"10.1109/CVPRW.2003.10010","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10010","url":null,"abstract":"This paper introduces a method to measure the average shape of handaxes, and characterize deviations from this average shape by taking into account both internal and external information. In the field of Paleolithic archaeology, standardization and symmetry can be two important concepts. For axially symmetrical shapes such as handaxes, it is possible to introduce a simple appropriate shape representation. We adapt a parameterized deformable model based approach to allow flexibility of shape coverage and analyze the similarity with a few compact parameters. Moreover a hierarchical fitting method ensures stability while measuring global and local shape features step-by-step. Our model incorporates a physics-based framework so as to deform due to forces exerted from boundary data sets.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"240 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134339355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text Processing Method for E-learning Videos
Jun Sun, Yukata Katsuyama, S. Naoi
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10024
E-learning has received more and more attention in recent years. The abundant text information in e-learning videos is valuable for indexing, searching, and other applications. To extract text from e-learning videos effectively, this paper proposes a text processing method composed of two parts: text-change frame detection and text extraction from images. The purpose of text-change frame detection is to remove redundant frames from the video and reduce the total processing time. A new text extraction algorithm is proposed to extract the text areas in the text-change frames for further recognition. Experiments on lecture videos demonstrate the good performance of our method.
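A minimal stand-in for the first part, text-change frame detection, compares each frame against the last kept frame and skips frames that have not changed enough; the mean-difference test and threshold are our assumptions, not the paper's algorithm.

```python
import numpy as np

def text_change_frames(frames, threshold=0.02):
    """Keep only frames that differ enough from the last kept frame;
    the redundant frames in between are dropped, so only text-change
    frames are passed on to text extraction and recognition.

    frames: iterable of equal-sized grayscale arrays in [0, 255].
    Returns the indices of the kept frames.
    """
    kept, last = [], None
    for i, f in enumerate(frames):
        f = np.asarray(f, float) / 255.0
        if last is None or np.abs(f - last).mean() > threshold:
            kept.append(i)
            last = f
    return kept
```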
{"title":"Text Processing Method for E-learning Videos","authors":"Jun Sun, Yukata Katsuyama, S. Naoi","doi":"10.1109/CVPRW.2003.10024","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10024","url":null,"abstract":"E-learning has received more and more attention in recent years. The abundant text information in E-learning videos is very valuable for information indexing, searching and other applications. In order to effectively extract the text from E-learning videos, a text processing method is proposed in this paper. The method is composed of two parts: text change frame detection and text extraction from image. The purpose of text change frame detection is to remove the redundant frames from the video and reduce the total processing time. A new text extraction algorithm is proposed to extract the text areas in the text change frames for further recognition. Experiments on lecture video manifest the good performance of our method.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134051972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Profile-based Pottery Reconstruction
M. Kampel, Robert Sablatnig
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10007
A major obstacle to the broader use of 3D object reconstruction and modeling is the extent of manual intervention needed. Such interventions are currently extensive and exist throughout every phase of a 3D reconstruction project: collection of images, image management, establishment of sensor position and image orientation, extraction of the geometric information describing an object, and merging of geometric, texture, and semantic data. We present a fully automated approach to pottery reconstruction based on the fragment profile, which is the cross-section of the fragment in the direction of the rotational axis of symmetry. We demonstrate the method and give results on synthetic and real data.
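Once the axis of symmetry and the profile curve have been estimated (the hard, automated part of the paper), the vessel surface follows by sweeping the profile around the axis. A minimal sketch of that final step, with names of our choosing:

```python
import numpy as np

def surface_of_revolution(profile, n_theta=90):
    """Sweep a fragment profile (the cross-section through the axis of
    rotational symmetry) around the z-axis.

    profile: (n, 2) array of (radius, height) samples along the curve.
    Returns an (n, n_theta, 3) grid of 3D surface points.
    """
    r, z = np.asarray(profile, float).T
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    x = r[:, None] * np.cos(theta)[None, :]
    y = r[:, None] * np.sin(theta)[None, :]
    zz = np.broadcast_to(z[:, None], x.shape)
    return np.stack([x, y, zz], axis=-1)
```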
{"title":"Profile-based Pottery Reconstruction","authors":"M. Kampel, Robert Sablatnig","doi":"10.1109/CVPRW.2003.10007","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10007","url":null,"abstract":"A major obstacle to the broader use of 3D object reconstruction and modeling is the extent of manual intervention needed. Such interventions are currently extensive and exist throughout every phase of a 3D reconstruction project: collection of images, image management, establishment of sensor position and image orientation, extracting the geometric information describing an object, and merging geometric, texture and semantic data. We present a fully automated approach to pottery reconstruction based on the fragment profile, which is the cross-section of the fragment in the direction of the rotational axis of symmetry. We demonstrate the method and give results on synthetic and real data.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133066414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Class of Mirrors for Wide-Angle Imaging
M. Srinivasan
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10069
Conventional mirrors for panoramic imaging usually capture circular images. As these images are difficult to interpret visually, they are often remapped digitally into a rectangular image in which one axis represents azimuth and the other elevation. This paper describes a class of mirrors that perform the capture as well as the remapping, thus eliminating the need for computational resources. They provide uniform resolution in azimuth and elevation, and can be designed to make full use of a camera's imaging surface.
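For reference, the digital remap that such mirrors render unnecessary looks roughly like the sketch below: each output column samples one azimuth, each row one radius. With a conventional mirror this linear radius-to-row map is not uniform in true elevation, which is exactly the nonuniformity the proposed mirrors remove optically. The names and the nearest-neighbor sampling are our simplifications.

```python
import numpy as np

def unwrap_panorama(img, center, r_min, r_max, out_w=720, out_h=120):
    """Remap a circular panoramic image to a rectangular one in which
    columns are azimuth and rows are (approximately) elevation, using
    nearest-neighbor sampling along radial rays.
    """
    cy, cx = center
    az = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    rad = np.linspace(r_max, r_min, out_h)
    xs = (cx + rad[:, None] * np.cos(az)[None, :]).round().astype(int)
    ys = (cy + rad[:, None] * np.sin(az)[None, :]).round().astype(int)
    return img[ys.clip(0, img.shape[0] - 1), xs.clip(0, img.shape[1] - 1)]
```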
{"title":"A New Class of Mirrors for Wide-Angle Imaging","authors":"M. Srinivasan","doi":"10.1109/CVPRW.2003.10069","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10069","url":null,"abstract":"Conventional mirrors for panoramic imaging usually capture circular images. As these images are difficult to interpret visually, they are often remapped digitally into a rectangular image in which one axis represents azimuth and the other elevation. This paper describes a class of mirrors that perform the capture as well as the remapping, thus eliminating the need for computational resources. They provide uniform resolution in azimuth and elevation, and can be designed to make full use of a camera's imaging surface.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133473625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Scene Control Using Human Body Postures
S. Yonemoto, R. Taniguchi
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10054
This paper describes a vision-based 3D real-virtual interaction system that enables realistic avatar motion control and in which the virtual camera is controlled by the user's body posture. The human motion analysis is implemented by blob tracking. A physically constrained motion synthesis method is implemented to generate realistic motion from a limited number of blobs. Our framework utilizes virtual scene context as a priori knowledge. To make the virtual scene more realistic, beyond the limitations of real-world sensing, we augment the virtual scene by simulating various real-world events. Concretely, we suppose that the virtual environment can provide action information for the avatar. Third-person viewpoint control coupled with body postures is also realized so that virtual objects can be accessed directly.
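The blob-tracking front end can be sketched per frame as connected-component labeling of a foreground mask followed by centroid extraction; the segmentation that produces the mask (e.g., background subtraction or skin color) is assumed, and the names are ours.

```python
import numpy as np
from scipy import ndimage

def blob_centroids(mask):
    """Label connected foreground regions in a binary mask and return
    their centroids as (row, col), largest blob first. Tracking then
    amounts to associating these centroids across frames.
    """
    labels, n = ndimage.label(mask)
    blobs = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        blobs.append((ys.size, (ys.mean(), xs.mean())))
    return [c for _, c in sorted(blobs, reverse=True)]
```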
{"title":"Virtual Scene Control Using Human Body Postures","authors":"S. Yonemoto, R. Taniguchi","doi":"10.1109/CVPRW.2003.10054","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10054","url":null,"abstract":"This paper describes a vision based 3D real-virtual interaction which enables realistic avatar motion control, and in which the virtual camera is controlled by the body posture of the user. The human motion analysis method is implemented by blob tracking. A physically-constrained motion synthesis method is implemented to generate realistic motion from a limit number of blobs. We address our framework to utilize virtual scene contexts as a priori knowledge. In order to make the virtual scene more realistically beyond the limitation of the real world sensing, we use a framework to augment the reality in the virtual scene by simulating various events of the real world. Concretely, we suppose that a virtual environment can provide action information for the avatar. 3rd-person viewpoint control coupled with body postures is also realized to directly access virtual objects.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"234 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125462225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction
M. Bartlett, G. Littlewort, Ian R. Fasel, J. Movellan
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10057
Computer-animated agents and robots bring a social dimension to human-computer interaction and force us to think in new ways about how computers could be used in daily life. Face-to-face communication is a real-time process operating at a time scale on the order of 40 milliseconds. The level of uncertainty at this time scale is considerable, making it necessary for humans and machines to rely on sensory-rich perceptual primitives rather than slow symbolic inference processes. In this paper we present progress on one such perceptual primitive. The system automatically detects frontal faces in the video stream and codes them in real time with respect to 7 dimensions: neutral, anger, disgust, fear, joy, sadness, and surprise. The face finder employs a cascade of feature detectors trained with boosting techniques [15, 2]. The expression recognizer receives image patches located by the face detector. A Gabor representation of each patch is formed and then processed by a bank of SVM classifiers. A novel combination of AdaBoost and SVMs enhances performance. The system was tested on the Cohn-Kanade dataset of posed facial expressions [6], measuring generalization performance to new subjects on a 7-way forced choice. Most interestingly, the outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation for coding facial expression dynamics in a fully automatic and unobtrusive manner. The system has been deployed on a wide variety of platforms, including Sony's Aibo pet robot, ATR's RoboVie, and CU animator, and is currently being evaluated for applications including automatic reading tutors and assessment of human-robot interaction.
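The Gabor stage can be illustrated with a small filter bank: each face patch is filtered at several orientations and wavelengths, and the pooled magnitudes form the feature vector that, in a pipeline like the paper's, would feed a bank of SVMs (one per expression). The kernel sizes and pooling here are our choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(ksize, sigma, theta, lam):
    """Real part of a Gabor filter: a Gaussian-windowed sinusoid
    oriented at angle theta with wavelength lam."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * xr / lam)

def gabor_features(patch, n_theta=8, lams=(4, 8, 16)):
    """Mean absolute Gabor response over several orientations and
    wavelengths; one feature vector per face patch."""
    feats = []
    for lam in lams:
        for t in range(n_theta):
            k = gabor_kernel(int(2 * lam) + 1, lam / 2.0,
                             t * np.pi / n_theta, lam)
            feats.append(np.abs(fftconvolve(patch, k, mode="same")).mean())
    return np.array(feats)
```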
{"title":"Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction.","authors":"M. Bartlett, G. Littlewort, Ian R. Fasel, J. Movellan","doi":"10.1109/CVPRW.2003.10057","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10057","url":null,"abstract":"Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life. Face to face communication is a real-time process operating at a a time scale in the order of 40 milliseconds. The level of uncertainty at this time scale is considerable, making it necessary for humans and machines to rely on sensory rich perceptual primitives rather than slow symbolic inference processes. In this paper we present progress on one such perceptual primitive. The system automatically detects frontal faces in the video stream and codes them with respect to 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. The face finder employs a cascade of feature detectors trained with boosting techniques [15, 2]. The expression recognizer receives image patches located by the face detector. A Gabor representation of the patch is formed and then processed by a bank of SVM classifiers. A novel combination of Adaboost and SVM's enhances performance. The system was tested on the Cohn-Kanade dataset of posed facial expressions [6]. The generalization performance to new subjects for a 7- way forced choice correct. Most interestingly the outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation to code facial expression dynamics in a fully automatic and unobtrusive manner. The system has been deployed on a wide variety of platforms including Sony's Aibo pet robot, ATR's RoboVie, and CU animator, and is currently being evaluated for applications including automatic reading tutors, assessment of human-robot interaction.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130227707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}