H. Bhat, Li-Hsuan Huang, Sebastian Rodriguez, Rick Dale, E. Heit
Using a large database of nearly 8 million bibliographic entries spanning over 3 million unique authors, we build predictive models to classify a paper based on its citation count. Our approach considers a diverse array of features, including the interdisciplinarity of authors, which we quantify using Shannon entropy and Jensen-Shannon divergence. Rather than rely on subject codes, we model the disciplinary preferences of each author by estimating the author's journal distribution. We conduct an exploratory data analysis of the relationship between these interdisciplinarity variables and citation counts. In addition, we model the effects of (1) each author's influence in coauthorship graphs and (2) words in the title of the paper. We then build classifiers for two- and three-class classification problems that correspond to predicting the interval in which a paper's citation count will lie. We use cross-validation and a true test set to tune model parameters and assess model performance. The best model we build, a classification tree, yields test set accuracies of 0.87 and 0.66, respectively. Using this model, we also provide rankings of attribute importance; for the three-class problem, these rankings indicate the importance of our interdisciplinarity metrics in predicting citation counts.
"Citation Prediction Using Diverse Features." H. Bhat, Li-Hsuan Huang, Sebastian Rodriguez, Rick Dale, E. Heit. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.131.
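The two interdisciplinarity metrics named in this abstract have standard closed forms over an author's journal distribution. A minimal sketch, assuming distributions are given as probability lists over a shared journal vocabulary (the two author distributions below are invented for illustration, not from the paper's data):

```python
from math import log2

def shannon_entropy(p):
    """Entropy (in bits) of a discrete distribution given as a list of probabilities."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions over the same support:
    the entropy of the midpoint minus the mean of the individual entropies."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return shannon_entropy(m) - (shannon_entropy(p) + shannon_entropy(q)) / 2

# Hypothetical journal distributions for two authors over four journals.
author_a = [0.7, 0.2, 0.1, 0.0]      # concentrated: low entropy, less interdisciplinary
author_b = [0.25, 0.25, 0.25, 0.25]  # uniform: maximal entropy, most interdisciplinary
```

With base-2 logarithms the divergence is bounded in [0, 1], which makes it convenient as a bounded feature; an author compared with themselves scores 0.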
We have created and are proposing KOTO-FRAME, previously called the dynamic quality function deployment (DQFD) technique, which evolved from quality function deployment (QFD). The method was applied to aseismatic mechanisms: it captures, as tacit knowledge, the logical structure underlying the architecture of a five-story pagoda by experimenting with a model of the steric balancing-toy principle. Consequently, without complex calculations, we were able to define the corresponding data structure in the attribution table of experiment or evaluation, which is worth applying not only to past data but also to future data obtained via experiment or evaluation, utilizing the idea of a "market of data."
"Application of Applied KOTO-FRAME to the Five-Story Pagoda Aseismatic Mechanism." Masahiko Teramoto, Jun Nakamura. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.173.
Collective inference models have recently been used to significantly improve the predictive accuracy of node classifications in network domains. However, these methods have generally assumed a fully labeled network is available for learning. There has been relatively little work on transfer learning methods for collective classification, i.e., to exploit labeled data in one network domain to learn a collective classification model to apply in another network. While there has been some work on transfer learning for link prediction and node classification, the proposed methods focus on developing algorithms to adapt the models without a deep understanding of how the network structure impacts transferability. Here we make the key observation that collective classification models are generally composed of local model templates that are rolled out across a heterogeneous network to construct a larger model for inference. Thus, the transferability of a model could depend on similarity of the local model templates and/or the global structure of the data networks. In this work, we study the performance of basic relational models when learned on one network and transferred to another network to apply collective inference. We show, using both synthetic and real data experiments, that transferability of models depends on both the graph structure and local model parameters. Moreover, we show that a probability calibration process (that removes bias due to propagation errors in collective inference) improves transferability.
"Analyzing the Transferability of Collective Inference Models Across Networks." Ransen Niu, Sebastián Moreno, Jennifer Neville. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.192.
Detecting terrorist-related content on social media is a problem for law enforcement agencies due to the large amount of information available. This work aims at detecting tweeps involved in the media mujahideen - the supporters of jihadist groups who disseminate propaganda content online. To do this, we use a machine learning approach with two sets of features: data-dependent features and data-independent features. Data-dependent features are heavily influenced by the specific dataset, while data-independent features are independent of the dataset and can be used on other datasets with similar results. With this approach, we hope that our method can serve as a baseline for classifying violent extremist content from different kinds of sources, since data-dependent features from various domains can be added. In our experiments we used the AdaBoost classifier. The results show that our approach works very well for classifying English tweeps and English tweets, but it does not perform as well on Arabic data.
"Detecting Multipliers of Jihadism on Twitter." Lisa Kaati, Enghin Omer, Nico Prucha, A. Shrestha. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.9.
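AdaBoost's core mechanism, reweighting examples so that later weak learners focus on earlier mistakes, can be sketched as follows. This is the textbook update (weighted error, learner vote, exponential reweighting), not code from the paper:

```python
from math import exp, log

def adaboost_round(weights, correct):
    """One AdaBoost reweighting step.

    weights: current example weights (non-negative, summing to 1)
    correct: booleans, True where the current weak learner classified correctly
    Returns (alpha, new_weights), where alpha is the weak learner's vote.
    """
    # Weighted error of the weak learner on the current distribution.
    eps = sum(w for w, c in zip(weights, correct) if not c)
    alpha = 0.5 * log((1 - eps) / eps)
    # Down-weight correct examples, up-weight mistakes, then renormalize.
    new = [w * exp(-alpha if c else alpha) for w, c in zip(weights, correct)]
    z = sum(new)
    return alpha, [w / z for w in new]
```

A well-known property of this update is that, after reweighting, the examples the learner got wrong carry exactly half the total weight, forcing the next weak learner to do better than chance on them.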
The authors introduce a growth model of the circulation of a stock and flow process that makes use of stock as an important factor in designing a data marketplace. The model highlights the necessity of considering how learning efficiency must be designed for the purpose of concept design. The model is applied to a business case in commercial industry to discuss the essentials of stock and learning efficiency with the aim of designing a data marketplace.
"Knowledge-Based Circulation Growth Model: Applying a Data Marketplace to Concept Design." Jun Nakamura, Masahiko Teramoto. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.81.
This work investigates the role of contrasting discourse relations signaled by cue phrases, together with phrase positional information, in predicting sentiment at the phrase level. Two domains of online reviews were chosen. The first domain is of nutritional supplement reviews, which are often poorly structured yet also allow certain simplifying assumptions to be made. The second domain is of hotel reviews, which have somewhat different characteristics. A corpus is built from these reviews, and manually tagged for polarity. We propose and evaluate a few new features that are realized through a lightweight method of discourse analysis, and use these features in a hybrid lexicon and machine learning based classifier. Our results show that these features may be used to obtain an improvement in classification accuracy compared to other traditional machine learning approaches.
"Sentiment Polarity Classification Using Structural Features." D. Ansari. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.57.
The rising accessibility and popularity of gambling products has increased interest in the effects of gambling. Nonetheless, research on gambling measures is scarce. This paper presents the application of data mining techniques to 46,514 gambling sessions to distinguish types of gambling and identify potential instances of problem gambling on EGMs (electronic gaming machines). Gambling sessions included measures of gambling involvement, out-of-pocket expense, winnings, and cost of gambling. In this first exploratory study, sessions were grouped into four clusters, as a stability test determined four clusters to be the highest-quality and most stable solution within our clustering criteria. Based on the gambling behavior expressed within these sessions, our k-means cluster analysis classified sessions as potential non-problem, potential low-risk, potential moderate-risk, and potential problem gambling sessions. While the complexity of EGM data prevents researchers from recognizing the incidence of problem gambling in a specific individual, our methods suggest that the lack of player identification does not prevent one from identifying the incidence of problem gambling behavior.
"Identifying Behavioral Characteristics in EGM Gambling Data Using Session Clustering." Maria Gabriella Mosquera, Vlado Keelj. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.211.
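A k-means session clustering like the one described can be sketched in a few lines. The toy session vectors below are invented; the paper's actual features (involvement, expense, winnings, cost) and its cluster-stability test are not reproduced here:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over lists of numeric feature vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = [sum(dim) / len(cl) for dim in zip(*cl)]
    return centers, clusters

# Hypothetical 2-D sessions: two clearly separated behavioral groups.
sessions = [[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.2, 9.9]]
centers, clusters = kmeans(sessions, k=2)
```

In practice one would run this for several values of k and keep the solution that is stable across restarts, which is the role the paper's stability test plays in settling on four clusters.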
Lars Ropeid Selsaas, B. Agrawal, Chunming Rong, T. Wiktorski
User identification and prediction is a typical problem in cross-device connection. User identification is useful for recommendation engines, online advertising, and user experiences. Extremely sparse, large-scale data make user identification a challenging problem. The key to better identification performance and accuracy is a model with a short turnaround time that can handle extremely sparse, large-scale data. In this paper, we propose a novel, efficient machine learning approach to this problem. We adapt the field-aware factorization machine approach using auto feature engineering techniques. Our model can handle multiple features within the same field. The model provides an efficient way to handle the fields in the matrix: it counts the unique fields in the matrix and divides the matrix by that value, which yields an efficient and scalable technique in terms of time complexity. The accuracy of the model is 0.864845 when tested on the Drawbridge dataset released for the ICDM 2015 Cross-Device Connections Challenge.
"AFFM: Auto feature engineering in field-aware factorization machines for predictive analytics." Lars Ropeid Selsaas, B. Agrawal, Chunming Rong, T. Wiktorski. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.245.
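The field-aware factorization machine underlying this model scores feature interactions with latent vectors chosen per (feature, opposing field) pair: each pair (j1, j2) contributes the dot product of j1's vector for j2's field with j2's vector for j1's field. A minimal sketch of that standard interaction term (the weights and field assignments below are illustrative, not the AFFM model itself):

```python
from itertools import combinations

def ffm_score(x, field, w):
    """Field-aware factorization machine interaction score.

    x:     dict feature_index -> value (sparse representation)
    field: dict feature_index -> field index
    w:     dict (feature_index, field_index) -> latent vector (list of floats)
    """
    s = 0.0
    for j1, j2 in combinations(sorted(x), 2):
        v1 = w[(j1, field[j2])]  # j1's latent vector specific to j2's field
        v2 = w[(j2, field[j1])]  # j2's latent vector specific to j1's field
        s += sum(a * b for a, b in zip(v1, v2)) * x[j1] * x[j2]
    return s
```

Because the latent vector used for a feature depends on the field of the feature it interacts with, FFM can model, say, a (device, browser) interaction differently from a (device, geo) interaction, which is what makes it attractive for sparse cross-device data.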
With the ever-increasing amount of medical image scans, it is critical to have an extensible framework that allows for mining such unstructured data. Such a framework would give a medical researcher flexibility in validating and testing hypotheses. Important characteristics of this type of framework include accuracy, efficiency, and extensibility. The objective of this work is to build an initial implementation of such a framework within a big data paradigm. To this end, a clinical data warehouse was built for the structured data, and a set of modules was created to analyze the unstructured content. The framework contains built-in modules but also allows the user to import their own, making it extensible. Furthermore, the framework runs the modules in a Hadoop cluster, making it efficient by utilizing the distributed computing capability of the big data approach. To test the framework, simulated data for 1,000 patients along with their hippocampus images were created. The results show that, using a built-in module, the framework accurately returned all 15 patients who had hippocampal resection with the hippocampus ipsilateral to surgery being less than 20% the size of the hippocampus contralateral to surgery. In addition, the framework allowed the user to run a different module on the previous output to further analyze the unstructured data. Finally, the framework also enabled the user to import a new module. This study paves the way towards showing the feasibility of such a framework to handle unstructured medical data in an accurate, efficient, and extensible manner.
"Extensible Query Framework for Unstructured Medical Data -- A Big Data Approach." Sarmad Istephan, Mohammad-Reza Siadat. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.67.
This paper discusses methods to identify individual users across their digital devices as part of the ICDM 2015 competition hosted on Kaggle. The competition's data set and prize pool were provided by Drawbridge (http://www.drawbrid.ge/) in sponsorship with the ICDM 2015 conference. The methods described in this paper focus on feature engineering and generic machine learning algorithms such as Extreme Gradient Boosting (XGBoost) and Follow the Regularized Leader-Proximal (FTRL-Proximal). The machine learning algorithms discussed in this paper can help improve a marketer's ability to identify individual users as they switch between devices and to show relevant content and recommendations to users wherever they go.
"Machine Learning Approach to Identify Users Across Their Digital Devices." Thakur Raj Anand, Oleksii Renov. 2015 IEEE International Conference on Data Mining Workshop (ICDMW), 2015-11-14. DOI: 10.1109/ICDMW.2015.243.
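FTRL-Proximal, one of the algorithms the paper names, has a well-known per-coordinate update for sparse logistic regression. A minimal sketch under assumed binary features (x_i = 1 for present features) and illustrative hyperparameters, not the authors' settings:

```python
from math import exp, sqrt

class FTRLProximal:
    """Minimal per-coordinate FTRL-Proximal for logistic loss on sparse binary
    features. Hyperparameter values are illustrative, not tuned."""

    def __init__(self, alpha=0.1, beta=1.0, l1=0.1, l2=1.0):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z, self.n = {}, {}  # per-coordinate accumulators

    def _weight(self, i):
        z = self.z.get(i, 0.0)
        if abs(z) <= self.l1:
            return 0.0  # L1 keeps this coordinate exactly at zero (sparsity)
        sign = 1.0 if z > 0 else -1.0
        return -(z - sign * self.l1) / (
            (self.beta + sqrt(self.n.get(i, 0.0))) / self.alpha + self.l2)

    def predict(self, features):
        s = sum(self._weight(i) for i in features)  # binary features: x_i = 1
        return 1.0 / (1.0 + exp(-s))

    def update(self, features, y):
        p = self.predict(features)
        g = p - y  # gradient of logistic loss w.r.t. the score, for x_i = 1
        for i in features:
            n, w = self.n.get(i, 0.0), self._weight(i)
            sigma = (sqrt(n + g * g) - sqrt(n)) / self.alpha  # per-coord step
            self.z[i] = self.z.get(i, 0.0) + g - sigma * w
            self.n[i] = n + g * g
```

The lazy weight computation and the L1 threshold are what make the method practical at this scale: only coordinates seen in a given example are touched, and most weights stay exactly zero.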