"Iterated Document Content Classification"
Chang An, H. Baird, Pingping Xiu
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.148

We report an improved methodology for training classifiers for document image content extraction, that is, the location and segmentation of regions containing handwriting, machine-printed text, photographs, blank space, etc. Our previous methods classified each pixel separately rather than by region: this avoids the arbitrariness and restrictiveness that result from constraining region shapes (to, e.g., rectangles). However, it also allows content classes to vary frequently within small regions, often yielding areas where several classes are mixed together. Real content is not organized this way: almost all small local regions are of uniform class. This observation suggested a post-classification methodology that enforces local uniformity without imposing a restricted class of region shapes. We extract features from small local regions (4-5 pixels in radius) and use them to train classifiers that operate on the output of previous classifiers, guided by ground truth. This yields a sequence of post-classifiers, each trained separately on the results of its predecessor. Experiments on a highly diverse test set of 83 document images show that this method reduces per-pixel classification errors by 23% and dramatically increases the occurrence of large contiguous regions of uniform class, providing highly usable near-solid 'masks' with which to segment the images into distinct classes, while still allowing a wide range of complex, non-rectilinear region shapes.
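The core idea of the abstract, relabeling each pixel from a small neighborhood of the previous stage's output, can be sketched with a plain majority vote standing in for the trained post-classifiers (the paper learns this mapping from ground truth; the radius and pass count here are illustrative, not the authors' settings):

```python
import numpy as np

def local_majority_relabel(labels, radius=2):
    """One 'post-classification' pass: relabel each pixel with the most
    frequent class in its (2*radius+1)^2 neighborhood. A majority vote is
    only a stand-in for the paper's trained post-classifiers."""
    h, w = labels.shape
    padded = np.pad(labels, radius, mode="edge")
    out = np.empty_like(labels)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            vals, counts = np.unique(window, return_counts=True)
            out[y, x] = vals[np.argmax(counts)]
    return out

def iterate_postclassifiers(labels, passes=3, radius=2):
    """Chain several passes, mirroring the paper's sequence of
    post-classifiers, each applied to the previous stage's output."""
    for _ in range(passes):
        labels = local_majority_relabel(labels, radius)
    return labels
```

Isolated misclassified pixels are absorbed by their surroundings, while large solid regions (including non-rectilinear ones) survive, which is how the method produces near-solid class masks.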
"Elastic Matching of Online Handwritten Tamil and Telugu Scripts Using Local Features"
L. Prasanth, V. Jagadeesh Babu, R. Raghunath Sharma, Prabhakara Rao, Dinesh Mandalapu
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.106

This paper describes character-based elastic matching using local features for recognizing online handwritten data. Dynamic time warping (DTW) was used with four different feature sets: x-y features; shape context (SC) and tangent angle (TA) features; generalized shape context (GSC) features; and a fourth set containing x-y, normalized first and second derivatives, and curvature features. A nearest-neighbor classifier with DTW distance was used as the classifier. In comparison, the SC and TA feature set was the slowest, and the fourth set gave the best recognition rate. Results are reported for online handwritten Tamil and Telugu data. On Telugu data we obtained an accuracy of 90.6% at a speed of 0.166 symbols/sec. To increase the speed we propose a 2-stage recognition scheme, which obtains an accuracy of 89.77% at 3.977 symbols/sec.
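The DTW distance paired with a nearest-neighbor classifier, as described above, follows the standard dynamic-programming recurrence; a minimal sketch (the feature extraction and the 2-stage speedup are not reproduced here):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (each row one sample, e.g. an (x, y) point). D[i][j] holds the best
    alignment cost of a[:i] against b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # match, or stretch either sequence by one sample
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A nearest-neighbor classifier then labels a query stroke with the class of the training sample minimizing this distance; because the warping path can repeat samples, sequences written at different speeds still align with low cost.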
"Deriving Symbol Dependent Edit Weights for Text Correction: The Use of Error Dictionaries"
Christoph Ringlstetter, Ulrich Reffle, Annette Gotscharek, K. Schulz
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.99

Most systems for correcting errors in texts make use of specific word distance measures such as the Levenshtein distance. Many experiments have shown that correction accuracy improves when edit weights depend on the particular symbols of the edit operation. However, most approaches proposed so far rely on large amounts of training data in which errors and their corrections are collected. In practice, preparing suitable ground truth data is often too costly, so uniform edit costs are used. In this paper we evaluate approaches for deriving symbol-dependent edit weights that do not need any ground truth training data, comparing them with methods based on ground truth training. We suggest a new approach in which special error dictionaries are used to estimate weights. The method is simple and very efficient, needing one pass over the document to be corrected. Our experiments with different OCR systems and textual data show that the method consistently improves correction accuracy in a significant way, often leading to results comparable to those achieved with ground truth training.
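The symbol-dependent distance the paper tunes can be sketched as a Levenshtein distance whose substitution cost is looked up per symbol pair (how the weights are estimated from error dictionaries is the paper's contribution and is not reproduced; insertions and deletions keep a uniform cost here for brevity):

```python
def weighted_levenshtein(s, t, sub_cost=None, default=1.0):
    """Levenshtein distance with symbol-dependent substitution weights.
    sub_cost maps (a, b) pairs to costs, e.g. learned confusions such as
    ('l', '1'); unknown pairs fall back to `default`."""
    sub_cost = sub_cost or {}
    n, m = len(s), len(t)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * default
    for j in range(1, m + 1):
        D[0][j] = j * default
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                sub = 0.0
            else:
                sub = sub_cost.get((s[i - 1], t[j - 1]), default)
            D[i][j] = min(D[i - 1][j] + default,      # deletion
                          D[i][j - 1] + default,      # insertion
                          D[i - 1][j - 1] + sub)      # substitution/match
    return D[n][m]
```

With a low weight on a typical OCR confusion such as ('l', '1'), a dictionary word differing only by that confusion ranks much closer to the garbled token than words at uniform distance 1, which is exactly why symbol-dependent weights improve correction accuracy.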
"Off-line Signature Verification System Performance against Image Acquisition Resolution"
J. Vargas-Bonilla, M. A. Ferrer-Ballester, C. Travieso-González, J. B. Alonso
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.4377032

The effect of changing image resolution on the performance of an off-line signature verification system is analyzed. The geometrical features used by the system are based on two vectors that represent the envelope and the interior stroke distribution in polar and Cartesian coordinates. Image resolution is progressively reduced from an initial 600 ppp down to 45 ppp. The robustness of the system against random and simple forgeries is tested with a hidden Markov model. The results show that 150 ppp offers a good trade-off between performance and image resolution for static features.
"Machine Dating of Handwritten Manuscripts"
Utpal Garain, S. K. Parui, T. Paquet, L. Heutte
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.4377017

This paper presents a pioneering study on automatic dating of handwritten manuscripts. Analysis of handwriting style forms the core of the dating method. Initially, it is hypothesized that a manuscript can be dated, to a certain level of accuracy, by looking at the way it is written. The hypothesis is then verified with real samples of known dates, and a general framework is proposed for machine dating of handwritten manuscripts. Experiments on a database containing manuscripts of Gustave Flaubert (1821-1880), the famous French novelist, report about 62% accuracy when manuscripts are dated within a range of five calendar years of their exact year of writing.
"Learning to Group Text Lines and Regions in Freeform Handwritten Notes"
Ming Ye, Paul A. Viola, Sashi Raghupathy, H. Sutanto, Chengyang Li
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.159

This paper proposes a machine learning approach to grouping problems in ink parsing. Starting from an initial segmentation, hypotheses are generated by perturbing local configurations and processed in a high-confidence-first fashion, where the confidence of each hypothesis is produced by a data-driven AdaBoost decision-tree classifier with a set of intuitive features. This framework has been successfully applied to grouping text lines and regions in complex freeform digital ink notes from real TabletPC users, and it holds great potential for many other grouping problems in the ink parsing and document image analysis domains.
"A New Method for Writer Identification and Verification Based on Farsi/Arabic Handwritten Texts"
F. Nejad, M. Rahmati
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.24

Most studies of writer identification are based on English documents, and to our knowledge no research has been reported on Farsi or Arabic documents. In this paper we propose a text-dependent method for off-line writer identification and verification based on Farsi handwriting. Following the idea presented in previous studies, we treat handwriting as a texture image: after a normalization step, Gabor filters are applied to the image and new features are extracted. A key property of the proposed method is a bank of Gabor filters suited to the structure of Farsi handwritten text and to the human visual system. We also propose a new method for extracting features from the Gabor filter outputs, based on moments and a nonlinear transform, and, by defining a confidence criterion, a new method for writer verification. Evaluation against other methods demonstrates that the proposed method achieves better performance on Farsi handwriting from 40 people.
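The texture-as-handwriting idea above can be sketched with a small hand-rolled Gabor bank: filter the image at several orientations and keep simple moment statistics of each response. The bank parameters and the mean/std features here are illustrative; the paper's Farsi-specific filter bank and nonlinear transform are not reproduced:

```python
import numpy as np

def gabor_kernel(ksize, theta, lam, sigma):
    """Real (even) Gabor kernel: a Gaussian windowing a cosine of
    wavelength lam, oscillating along orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   lam=8.0, sigma=4.0, ksize=15):
    """Mean and standard deviation of each filtered image: simple
    moment features in the spirit of the paper's moment-based features."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(ksize, theta, lam, sigma)
        # circular 2-D convolution via FFT, to stay within numpy
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

A writer's habitual stroke slant and spacing show up as energy at particular orientations and frequencies, so these per-orientation statistics separate writers even on the same text.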
"Simultaneous Layout Style and Logical Entity Recognition in a Heterogeneous Collection of Documents"
Siyuan Chen, Song Mao, G. Thoma
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.231

Logical entity recognition in heterogeneous collections of document page images remains a challenging problem, since the performance of traditional supervised methods degrades dramatically when many distinct layout styles are present. In this paper we present an unsupervised method in which layout style information is explicitly used in both the training and recognition phases. We represent the layout style, local features, and logical labels of the physical regions of a document compactly by an ordered labeled X-Y tree; style dissimilarity of two document pages is represented by the distance between their respective trees. During the training phase, document pages with true logical labels are grouped into distinct layout styles by unsupervised clustering. During the recognition phase, the layout style and logical entities of an input document are recognized simultaneously by matching the input tree to the trees in the closest-matched layout style cluster of the training set. Experimental results show that our algorithm is robust to both balanced and unbalanced style cluster sizes, zone over-segmentation, zone length variation, and variation in tree representations of the same layout style.
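The tree distance underlying the style clustering above can be sketched with a simplified recursive matcher over ordered labeled trees, each written as `(label, [children])`: a label-mismatch cost plus an edit-distance alignment of the child sequences, where dropping a subtree costs its size. This is a cruder, hypothetical stand-in for a full tree-edit distance, shown only to make the X-Y tree comparison concrete:

```python
def tree_dist(a, b):
    """Distance between two ordered labeled trees (label, [children]).
    Label mismatch costs 1; child sequences are aligned by edit distance,
    with subtree deletion/insertion costing the subtree's node count.
    Recursion inside the DP makes this exponential in the worst case,
    which is acceptable for a sketch on small X-Y trees."""
    def size(t):
        return 1 + sum(size(c) for c in t[1])

    la, ca = a
    lb, cb = b
    cost = 0.0 if la == lb else 1.0
    n, m = len(ca), len(cb)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + size(ca[i - 1])
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + size(cb[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + size(ca[i - 1]),          # drop a child subtree
                          D[i][j - 1] + size(cb[j - 1]),          # insert a child subtree
                          D[i - 1][j - 1] + tree_dist(ca[i - 1], cb[j - 1]))
    return cost + D[n][m]
```

Pages sharing a layout style yield structurally similar X-Y trees and hence small distances, so a standard clustering algorithm run on this distance matrix recovers the style clusters.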
"Text Input System Using Online Overlapped Handwriting Recognition for Mobile Devices"
Yojiro Tonouchi, A. Kawamura
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.243

This paper proposes a novel online overlapped handwriting recognition system for mobile devices such as cellular phones. Users can input characters continuously, without pauses, in a single writing area. The system has three features: a small writing area, quick response, and direct operations with handwritten gestures, which make it well suited to mobile devices. It realizes a new handwriting interface similar to touch-typing. We evaluated the system in two experiments, measuring character recognition performance and text entry speed for Japanese sentences; these experiments show the effectiveness of the proposed system.
"PRAAD: Preprocessing and Analysis Tool for Arabic Ancient Documents"
Wafa Boussellaa, Abderrazak Zahour, B. Taconet, A. Alimi, A. BenAbdelhafid
Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), 23 September 2007. DOI: 10.1109/ICDAR.2007.209

This paper presents PRAAD, a new system for preprocessing and analysis of ancient Arabic documents, composed of two main parts: preprocessing and document analysis. After digitization, color or greyscale images of ancient documents are distorted by strong background artefacts such as optical scan blur, noise, show-through and bleed-through effects, and spots. In order to preserve and exploit these cultural heritage documents, we have built an efficient tool that performs restoration and binarisation and analyses the document layout. The tool draws on our expertise in document image processing of ancient Arabic documents, both printed and handwritten. The functions of the PRAAD system are tested on a set of ancient Arabic documents from the National Library and the National Archives of Tunisia.
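As a point of reference for the binarisation step mentioned above, the simplest baseline is a global Otsu threshold, which maximizes between-class variance over the grey-level histogram. This is only a baseline sketch: restoring show-through and bleed-through, as PRAAD does, requires more than a single global threshold:

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold over grey levels 0..255: pick the split
    maximizing between-class variance w0*w1*(mu0 - mu1)^2."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum()          # pixels below the candidate threshold
        w1 = total - w0              # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t] * levels[:t]).sum() / w0
        mu1 = (hist[t:] * levels[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarise(gray):
    """Foreground/background mask: 1 where the pixel is at or above
    the Otsu threshold, 0 below it."""
    return (gray >= otsu_threshold(gray)).astype(np.uint8)
```

On a clean bimodal page (dark ink on light paper) this separates text from background well; on degraded documents with bleed-through, the background mode overlaps the ink mode, which motivates the restoration stage preceding binarisation.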