The problem of robust sparse coding is considered. It is defined as finding linear reconstruction coefficients that minimize the sum of absolute values of the errors, instead of the more typically used sum of squares of the errors. This change lowers the influence of large errors and enhances the robustness of the solution to noise in the data. Sparsity is enforced by limiting the sum of absolute values of the coefficients. We present an algorithm that finds the path traced by the coefficients when the sparsity-inducing constraint is varied. The optimality conditions are derived and included in the algorithm to speed its execution. The proposed method is validated on the problem of robust face recognition.
{"title":"Obtaining Full Regularization Paths for Robust Sparse Coding with Applications to Face Recognition","authors":"J. Chorowski, J. Zurada","doi":"10.1109/ICMLA.2012.66","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.66","url":null,"abstract":"The problem of robust sparse coding is considered. It is defined as finding linear reconstruction coefficients that minimize the sum of absolute values of the errors, instead of the more typically used sum of squares of the errors. This change lowers the influence of large errors and enhances the robustness of the solution to noise in the data. Sparsity is enforced by limiting the sum of absolute values of the coefficients. We present an algorithm that finds the path traced by the coefficients when the sparsity-inducing constraint is varied. The optimality conditions are derived and included in the algorithm to speed its execution. The proposed method is validated on the problem of robust face recognition.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117088421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lexical abstraction hierarchies can be leveraged to provide semantic information that characterizes features of text corpora as a whole. This information may be used to determine the classification utility of the dimensions that describe a dataset. This paper presents a new method for preparing a dataset for probabilistic classification by determining, a priori, the utility of a very small subset of taxonomically-related dimensions via a Discriminative Multinomial Naive Bayes process. We show that this method yields significant improvements over both Discriminative Multinomial Naive Bayes and Bayesian network classifiers alone.
{"title":"Taxonomic Dimensionality Reduction in Bayesian Text Classification","authors":"Richard A. McAllister, John W. Sheppard","doi":"10.1109/ICMLA.2012.93","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.93","url":null,"abstract":"Lexical abstraction hierarchies can be leveraged to provide semantic information that characterizes features of text corpora as a whole. This information may be used to determine the classification utility of the dimensions that describe a dataset. This paper presents a new method for preparing a dataset for probabilistic classification by determining, a priori, the utility of a very small subset of taxonomically-related dimensions via a Discriminative Multinomial Naive Bayes process. We show that this method yields significant improvements over both Discriminative Multinomial Naive Bayes and Bayesian network classifiers alone.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115478448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Criminal activity in virtual worlds is becoming a major problem for law enforcement agencies. Forensic investigators are increasingly interested in accurately and automatically tracking people in virtual communities. In this paper, a set of algorithms capable of verifying and recognizing avatar faces with a high degree of accuracy is described. Results of experiments on within-virtual-world avatar authentication and on inter-reality scenarios of tracking a person between the real and virtual worlds are reported. On the FERET-to-Avatar face dataset, where an avatar face was generated from every photo in the FERET database, a commercial off-the-shelf (COTS) face recognition algorithm achieved a near-perfect 99.58% accuracy on 725 subjects. On a dataset of avatars from Second Life, the proposed avatar-to-avatar matching algorithm (which uses a fusion of local structural and appearance descriptors) achieved average true accept rates of (i) 96.33% using manual eye detection and (ii) 86.5% in fully automated mode, at a false accept rate of 1.0%. Combining the proposed face matcher with a state-of-the-art commercial matcher (FaceVACS) further improved results in the inter-reality scenario.
{"title":"Face Recognition in the Virtual World: Recognizing Avatar Faces","authors":"Roman V. Yampolskiy, Brendan Klare, Anil K. Jain","doi":"10.1109/ICMLA.2012.16","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.16","url":null,"abstract":"Criminal activity in virtual worlds is becoming a major problem for law enforcement agencies. Forensic investigators are becoming interested in being able to accurately and automatically track people in virtual communities. In this paper a set of algorithms capable of verification and recognition of avatar faces with high degree of accuracy are described. Results of experiments aimed at within-virtual-world avatar authentication and inter-reality-based scenarios of tracking a person between real and virtual worlds are reported. In the FERET-to-Avatar face dataset, where an avatar face was generated from every photo in the FERET database, a COTS FR algorithm achieved a near perfect 99.58% accuracy on 725 subjects. On a dataset of avatars from Second Life, the proposed avatar-to-avatar matching algorithm (which uses a fusion of local structural and appearance descriptors) achieved average true accept rates of (i) 96.33% using manual eye detection, and (ii) 86.5% in a fully automated mode at a false accept rate of 1.0%. A combination of the proposed face matcher and a state-of-the art commercial matcher (FaceVACS) resulted in further improvement on the inter-reality-based scenario.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115485413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Healthcare is particularly rich in semantic information and background knowledge describing data. This paper discusses a number of semantic data types that can be found in healthcare data, presents how the semantics can be extracted from existing sources including the Unified Medical Language System (UMLS), discusses how the semantics can be used in both supervised and unsupervised learning, and presents an example rule learning system that implements several of these types. Results from three example applications in the healthcare domain are used to further exemplify semantic data types.
{"title":"Semantic Data Types in Machine Learning from Healthcare Data","authors":"Janusz Wojtusiak","doi":"10.1109/ICMLA.2012.41","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.41","url":null,"abstract":"Healthcare is particularly rich in semantic information and background knowledge describing data. This paper discusses a number of semantic data types that can be found in healthcare data, presents how the semantics can be extracted from existing sources including the Unified Medical Language System (UMLS), discusses how the semantics can be used in both supervised and unsupervised learning, and presents an example rule learning system that implements several of these types. Results from three example applications in the healthcare domain are used to further exemplify semantic data types.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"203 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122932925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a feature-level fusion method based on mapping the original low-level audio features to histogram descriptors. Our mapping is based on possibilistic membership functions and has two main components. The first consists of clustering each set of features and identifying a set of representative prototypes. The second uses the learned prototypes within membership functions to transform the original features into histograms. The mapping transforms features of different dimensions into histograms of fixed dimension. This makes the fusion of multiple features less biased by the dimensionality and distributions of the individual features. Using a standard collection of songs, we show that the transformed features provide higher classification accuracy than the original features. We also show that mapping simple low-level features and using a K-NN classifier provides results comparable to the state of the art.
{"title":"Feature Mapping and Fusion for Music Genre Classification","authors":"H. Balti, H. Frigui","doi":"10.1109/ICMLA.2012.59","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.59","url":null,"abstract":"We propose a feature level fusion that is based on mapping the original low-level audio features to histogram descriptors. Our mapping is based on possibilistic membership functions and has two main components. The first one consists of clustering each set of features and identifying a set of representative prototypes. The second component uses the learned prototypes within membership functions to transform the original features into histograms. The mapping transforms features of different dimensions to histograms of fixed dimensions. This makes the fusion of multiple features less biased by the dimensionality and distributions of the different features. Using a standard collection of songs, we show that the transformed features provide higher classification accuracy than the original features. We also show that mapping simple low-level features and using a K-NN classifier provides results comparable to the state-of-the art.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123929704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social media sites are now the most popular destination for Internet users, providing social scientists with a great opportunity to understand online behaviour. There is a growing number of research papers related to social media, a small number of which focus on personality prediction. To date, studies have typically focused on the Big Five personality traits, but one relatively unexplored area is that of the anti-social traits of narcissism, Machiavellianism and psychopathy, commonly referred to as the Dark Triad. This study explored the extent to which it is possible to determine anti-social personality traits based on Twitter use. This was done by comparing the Dark Triad and Big Five personality traits of 2,927 Twitter users with their profile attributes and use of language. Analysis shows that there are some statistically significant relationships between these variables. Through the use of crowd-sourced machine learning algorithms, we show that machine learning provides useful prediction rates, but is imperfect in predicting an individual's Dark Triad traits from Twitter activity. While predictive models may be unsuitable for predicting an individual's personality, they may still be of practical importance when applied to large groups of people, for example to see whether anti-social traits are increasing or decreasing over a population. Our results raise important questions related to the unregulated use of social media analysis for screening purposes. It is important that the practical and ethical implications of drawing conclusions about personal information embedded in social media sites are better understood.
{"title":"Predicting Dark Triad Personality Traits from Twitter Usage and a Linguistic Analysis of Tweets","authors":"Chris Sumner, A. Byers, Rachel Boochever, Gregory J. Park","doi":"10.1109/ICMLA.2012.218","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.218","url":null,"abstract":"Social media sites are now the most popular destination for Internet users, providing social scientists with a great opportunity to understand online behaviour. There are a growing number of research papers related to social media, a small number of which focus on personality prediction. To date, studies have typically focused on the Big Five traits of personality, but one area which is relatively unexplored is that of the anti-social traits of narcissism, Machiavellians and psychopathy, commonly referred to as the Dark Triad. This study explored the extent to which it is possible to determine anti-social personality traits based on Twitter use. This was performed by comparing the Dark Triad and Big Five personality traits of 2,927 Twitter users with their profile attributes and use of language. Analysis shows that there are some statistically significant relationships between these variables. Through the use of crowd sourced machine learning algorithms, we show that machine learning provides useful prediction rates, but is imperfect in predicting an individual's Dark Triad traits from Twitter activity. While predictive models may be unsuitable for predicting an individual's personality, they may still be of practical importance when models are applied to large groups of people, such as gaining the ability to see whether anti-social traits are increasing or decreasing over a population. Our results raise important questions related to the unregulated use of social media analysis for screening purposes. It is important that the practical and ethical implications of drawing conclusions about personal information embedded in social media sites are better understood.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126209517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While there are many functions defined in the literature to measure the error magnitude (how much), the problem of defining the spatial error (where) is not as well established. For instance, suppose a global growth of 10 MW in electrical demand is expected in a given region. For electrical system planning, not only the amount but also the location must be considered: predicting a growth of 10 MW (how much) in the south (where) of a city would lead to completely different policies in terms of resource allocation (for instance, a new substation) than predicting the same 10 MW in the north. To cope with this difficulty, this paper proposes the concept of spatial error as the cost of transporting the surplus of one region to compensate the deficit of another. This conceptual problem is formulated as a transportation optimization problem. The paper describes conceptually the difference between magnitude and spatial error measures and presents an algorithm to deal efficiently with the defined framework.
{"title":"Measuring the Spatial Error in Load Forecasting for Electrical Distribution Planning as a Problem of Transporting the Surplus to the In-Deficit Locations","authors":"D. Vieira, M. A. M. Cabral, T. V. Menezes, B. E. Silva, A. C. Lisboa","doi":"10.1109/ICMLA.2012.203","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.203","url":null,"abstract":"While there are many functions defined in the literature to measure the error magnitude (how much), the problem of dinning the spatial error (where) is not so well defined. For instance, in a given region it is expected a global growth in the electrical demand of 10MW. For the electrical system planning not only the amount but also the location must be considered. Predicting a growth of 10MW (how much) in the south (where) of a city would lead to complete different polices in terms of resources allocation (for instance a new substation) than predicting the same amount of 10MW in the north. Trying to cope with this difficulty, this paper proposes the concept of spatial error as the cost of transporting the surplus of one region to compensate another region deceit. This conceptual problem was written as an optimization transportation problem. This paper describes conceptually the difference between magnitude and spatial error measures and shows an algorithm to deal efficiently with the defined framework.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129667613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software effort prediction is an important task in the software development life cycle. Many models, including regression models, machine learning models, algorithmic models, expert judgment and estimation by analogy, have been widely used to estimate software effort and cost. In this work, a Treeboost (stochastic gradient boosting) model is put forward to predict software effort based on the Use Case Point method. The inputs of the model are software size in use case points, productivity and complexity. A multiple linear regression model was also created, and the Treeboost model was evaluated against both the multiple linear regression model and the Use Case Point model using four performance criteria: MMRE, PRED, MdMRE and MSE. Experiments show that the Treeboost model can be used with promising results to estimate software effort.
{"title":"A Treeboost Model for Software Effort Estimation Based on Use Case Points","authors":"A. B. Nassif, Luiz Fernando Capretz, D. Ho, Mohammad Azzeh","doi":"10.1109/ICMLA.2012.155","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.155","url":null,"abstract":"Software effort prediction is an important task in the software development life cycle. Many models including regression models, machine learning models, algorithmic models, expert judgment and estimation by analogy have been widely used to estimate software effort and cost. In this work, a Tree boost (Stochastic Gradient Boosting) model is put forward to predict software effort based on the Use Case Point method. The inputs of the model include software size in use case points, productivity and complexity. A multiple linear regression model was created and the Tree boost model was evaluated against the multiple linear regression model, as well as the use case point model by using four performance criteria: MMRE, PRED, MdMRE and MSE. Experiments show that the Tree boost model can be used with promising results to estimate software effort.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130568040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although search engines have deployed various techniques to detect and filter out Web spam, Web spammers continue to develop new tactics to influence the results of search engine ranking algorithms, for the purpose of obtaining undeservedly high ranks. In this paper, we study the effect of the page language on spam detection features. We examine how the distribution of a set of selected detection features changes according to the page language. We also study the effect of the page language on the detection rate of a given classifier using a selected set of detection features. The analysis results show that selecting suitable features for a classifier that segregates spam pages depends heavily on the language of the examined Web page, due in part to the different sets of Web spam mechanisms used by each type of spammer.
{"title":"Web Spam: A Study of the Page Language Effect on the Spam Detection Features","authors":"A. Alarifi, Mansour Alsaleh","doi":"10.1109/ICMLA.2012.229","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.229","url":null,"abstract":"Although search engines have deployed various techniques to detect and filter out Web spam, Web stammers continue to develop new tactics to influence the result of search engines ranking algorithms, for the purpose of obtaining an undeservedly high ranks. In this paper, we study the effect of the page language on the spam detection features. We examine how the distribution of a set of selected detection features changes according to the page language. Also, we study the effect of the page language on the detection rate of a given classifier using a selected set of detection features. The analysis results show that selecting suitable features for a classifier that segregates spam pages depends heavily on the language of the examined Web page, due in part to the different set of Web spam mechanisms used by each type of stammers.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130878514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a method for estimating the probability of occurrence of a syndrome as a function of the geographical coordinates of the individuals at risk. The data describing the locations of syndrome cases over the population undergo moving-average filtering, and the resulting values are fitted by an RBF network performing a regression. Contour curves of the RBF network are then employed to establish the boundaries between four kinds of regions: regions of high incidence, regions of medium incidence, regions of slightly abnormal incidence, and regions of normal prevalence. In each region, the risk is estimated with three indicators: a nominal risk, an upper-bound risk and a lower-bound risk. These indicators are obtained by adjusting the probability employed in the Monte Carlo simulation of syndrome scenarios over the population. The nominal risk is the probability for which the empirical number of syndrome cases corresponds to the median of the Monte Carlo simulations. The upper-bound and lower-bound risks are the probabilities for which the empirical number of syndrome cases corresponds to the 25th and 75th percentiles of the simulations, respectively. The proposed method constitutes an advance over currently known techniques of spatial cluster detection, which are dedicated to finding clusters of abnormal occurrence of a syndrome without quantifying the probability associated with such an abnormality and without stratifying sub-regions by their associated risks. The proposed method was applied to data previously studied in a paper aimed at finding a cluster of dengue fever. The result obtained here is compatible with the cluster found in that reference.
{"title":"Risk Estimation in Spatial Disease Clusters: An RBF Network Approach","authors":"Fernanda C. Takahashi, Ricardo H. C. Takahashi","doi":"10.1109/ICMLA.2012.233","DOIUrl":"https://doi.org/10.1109/ICMLA.2012.233","url":null,"abstract":"This paper proposes a method which is suitable for the estimation of the probability of occurrence of a syndrome, as a function of the geographical coordinates of the individuals under risk. The data describing the location of syndrome cases over the population suffers a moving-average filtering, and the resulting values are fitted by an RBF network performing a regression. Some contour curves of the RBF network are then employed in order to establish the boundaries between four kinds of regions: regions of high-incidence, regions of medium incidence, regions of slightly-abnormal incidence, and regions of normal prevalence. In each region, the risk is estimated with three indicators: a nominal risk, an upper bound risk and a lower bound risk. Those indicators are obtained by adjusting the probability employed for the Monte Carlo simulation of syndrome scenarios over the population. The nominal risk is the probability which produces Monte Carlo simulations for which the empirical number of syndrome cases corresponds to the median. The upper bound and the lower bound risks are the probabilities which produce Monte Carlo simulations for which the empirical values of syndrome cases correspond respectively to the 25% percentile and the 75% percentile. The proposed method constitutes an advance in relation to the currently known techniques of spatial cluster detection, which are dedicated to finding clusters of abnormal occurrence of a syndrome, without quantifying the probability associated to such an abnormality, and without performing a stratification of different sub-regions with different associated risks. The proposed method was applied on data which were studied formerly in a paper that was intended to find a cluster of dengue fever. The result determined here is compatible with the cluster that was found in that reference.","PeriodicalId":157399,"journal":{"name":"2012 11th International Conference on Machine Learning and Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122027312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}