Continuous Integration (CI) has become a standard practice in software development in recent years. As an essential part of CI, the build process creates software from source code. Predicting build outcomes helps developers review and fix bugs before building, saving time. However, objective evidence about the practical factors that affect build results is still missing. Travis CI provides a hosted, distributed continuous integration service used to build and test software projects hosted on GitHub, and TravisTorrent is a dataset that provides a deep analysis of the source code, process, and dependency status of projects built on Travis CI. We use this dataset to investigate which factors may impact a build result. We first preprocess the TravisTorrent data to extract 27 features. We then analyze the correlation between these features and the result of a build. Finally, we build four prediction models to predict the result of a build and perform a comparative analysis across them. We found that, in our study, the number of commits in a build (git_num_all_built_commits) is the most important factor affecting the build result, and that SVM performs best among the four prediction models we used.
{"title":"What are the Factors Impacting Build Breakage?","authors":"Yang Luo, Yangyang Zhao, Wanwangying Ma, Lin Chen","doi":"10.1109/WISA.2017.17","DOIUrl":"https://doi.org/10.1109/WISA.2017.17","url":null,"abstract":"Continuous Integration (CI) has become a good practice of software development in recent years. As an essential part of CI, build creates software from source code. Predicting build outcome help developers to review and fix bugs before building to save time. However, we are missing objective evidence of practical factors affecting build result. Travis CI provides a hosted, distributed continuous integration service used to build and test software projects hosted at GitHub. The TravisTorrent is a dataset which deeply analyzes source code, process and dependency status of projects hosting on Travis CI. We use this dataset to investigate which factors may impact a build result. We first preprocess TravisTorrent data to extract 27 features. We then analyze the correlation between these features and the result of a build. Finally, we build four prediction models to predict the result of a build and perform a horizontal analysis. We found that in our study, the number of commits in a build (git_num_all_built_commits) is the most import factor that has significant impact on the build result, and SVM performs best in the four of the prediction models we used.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122657061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Global Positioning System (GPS) is used to locate a specific point on the Earth. Although GPS positioning technology is becoming more and more mature, GPS readings always contain errors inherent to the equipment or the measurement method, so map matching is a very important preprocessing step for many applications, such as traffic flow control, taxi mileage calculation, and locating people. However, many current methods only consider the distance between segments and do not handle the angle between two segments. In this paper, we propose a new road network map matching algorithm based on the Hidden Markov Model (HMM), a popular solution for map matching, which considers not only the distance between two sample points but also the angle between two candidate segments. To solve the HMM, we use the Viterbi dynamic programming algorithm to find the maximum-probability sequence of road segments. Experiments on a real Beijing road map dataset show that our map matching algorithm significantly improves accuracy compared with the global ST-Matching algorithm.
{"title":"Online Map Matching Algorithm Using Segment Angle Based on Hidden Markov Model","authors":"Jie Xu, Na Ta, Chunxiao Xing, Yong Zhang","doi":"10.1109/WISA.2017.19","DOIUrl":"https://doi.org/10.1109/WISA.2017.19","url":null,"abstract":"The Global Positioning System(GPS) is used to find a specific point on the real earth although GPS positioning technology is becoming more and more mature, GPS always exists with equipment inherent errors or measurement methods errors. so map matching step is a very important preprocessing for lots of applications, such as traffic flow control, taxi mileage calculation, and finding some people. However, many current methods only deal with distance variables and do not handle angle variables between two segments. In this paper, we propose a new road network map matching algorithm, considering not only the distance between two sample points but also taking into account the angle between two candidate segments using the Hidden Markov Model (HMM) which is a popular solution for map matching. Subsequently, to solve the HMM problem, we make use of dynamic programming Viterbi algorithm to find the maximum probability road segments. The experiments are implemented on BEIJING CITY map real dataset and display that our map matching algorithm significantly improve the accuracy compared with ST-Matching global algorithm.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124881790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Entity search is a new search pattern that returns related entities to users rather than large numbers of web pages full of mixed and messy information. It is also a challenging research topic because it is difficult to understand the meaning of users' input and to identify the entities within messy web pages. In this paper, we propose an entity search pattern based on online encyclopedias and define it as MMK search (Multi-modifier Search), in which the user's input text includes exactly one kernel concept and multiple modifiers. We propose a solution framework for this kind of search, along with a method to identify the expected entities based on widely used online encyclopedias. To evaluate these methods, we created an experimental dataset and a baseline with the help of participants; the results verify the effectiveness of our methods.
{"title":"A Domain-Independent Multi-modifier Entity Search Method","authors":"Huan Liao, Yukun Li, Gang Hao, Dexin Zhao, Yongxuan Lai, Weiwei Wang","doi":"10.1109/WISA.2017.41","DOIUrl":"https://doi.org/10.1109/WISA.2017.41","url":null,"abstract":"Entity search is a new search pattern that return related entities to users rather than amounts of web pages containing mass and messy information. It is also a challenging research topic because it is difficult to understand the meaning of users' input and identify the entities from the messy web pages. In this paper, we propose an entity search pattern based on online encyclopedias and define it as MMK search(Multi-modifier Search), which means the input text by people only includes one kernel concept and multiple modifiers. We propose a solution framework to solve this kind of search, and propose a method to identify expected entities based on well-utilized online encyclopedias. To evaluate the methods, we create an experimental data set and a baseline under the help of participants, the results verified the effectiveness of our methods.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124712708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finding relevant and high-quality data is a perennial need for data consumers (i.e., users). Many open data portals provide users with simple ways of finding datasets on a particular topic (i.e., topical datasets), but offer no way to filter and rank topical datasets based on data quality. Despite recent advances in the development and standardization of data quality models and vocabularies, there is a lack of systematic research on approaches and tools for user-driven, data quality-based filtering and ranking of topical datasets. In this paper we address this problem by developing a generic software architecture and a corresponding approach, called ODQFiRD, for filtering and ranking topical datasets according to user-specified data quality assessment criteria. Additionally, we use our prototype implementation of ODQFiRD to conduct a case study on the U.S. Government's open data portal. The prototype implementation and experimental results show that ODQFiRD is feasible and effective.
{"title":"User-Driven Filtering and Ranking of Topical Datasets Based on Overall Data Quality","authors":"Wenze Xia, Zhuoming Xu, Chengwang Mao","doi":"10.1109/WISA.2017.24","DOIUrl":"https://doi.org/10.1109/WISA.2017.24","url":null,"abstract":"Finding relevant and high-quality data is the eternal needs for data consumers (i.e., users). Many open data portals have been providing users with simple ways of finding datasets on a particular topic (i.e., topical datasets), which are not a way of filtering and ranking topical datasets based on data quality. Despite the recent advances in the development and standardization of data quality models and vocabulary, there is a lack of systematic research on approaches and tools for user-driven data quality-based filtering and ranking of topical datasets. In this paper we address the problem of user-driven filtering and ranking of topical datasets based on the overall data quality of datasets by developing a generic software architecture and the corresponding approach, called ODQFiRD, for filtering and ranking topical datasets according to user-specified data quality assessment criteria. Additionally, we use our implemented prototype of ODQFiRD to conduct a case study experiment on the U.S. Government's open data portal. The prototype implementation and experimental results show that our proposed ODQFiRD is achievable and effective.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126496391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the progress of information openness and the development of Internet technology, judicial data have begun to enter public view, and their main carrier is the judgment document, which reflects almost all of the information about a case. At the same time, everyone is now a producer and consumer of social media content, which expresses the public's opinions about the law; as a result, legal meaning is no longer limited to the professional field but also carries social cognitive meanings. Exploring the relationship between professional legal meaning and social cognition has therefore become an important issue. We use a knowledge graph to construct the relationship network between social media and the legal entities found in professional judicial data, and we introduce the related knowledge graph construction methods.
{"title":"Knowledge Graph Construction Based on Judicial Data with Social Media","authors":"Hao Lian, Zemin Qin, Tieke He, B. Luo","doi":"10.1109/WISA.2017.46","DOIUrl":"https://doi.org/10.1109/WISA.2017.46","url":null,"abstract":"With the process of the information openness and the development of Internet technology, judicial data begin to enter the public view, and the carrier of that is the referee document, as referee document almost reflect all information of cases. Everyone is a social media content producer and consumer, which on behalf of public's opinion about the law, and it has made the legal significance not only limited to the professional field, but also includes social cognitive meanings. So, Digging into the relationship between professional legal meaning and social cognition has become an important issue. We use the knowledge graph to construct the relationship network between social media and law entities of professional legal data, and introduce the related methods of knowledge graph.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123042456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Topic classification is a foundational task in many NLP applications. Traditional topic classifiers often rely on many hand-designed features, while word embeddings and convolutional neural networks (CNNs) from deep learning have been applied to topic classification in recent years. In this paper, the influence of different word embeddings on CNN classifiers is studied, and an improved word embedding named HybridWordVec is proposed, which combines word2vec vectors with topic distribution vectors. Experiments were conducted on the Chinese Fudan corpus and the English 20Newsgroups corpus. The results show that a CNN with HybridWordVec achieves an accuracy of 91.82% on the Chinese corpus and 95.67% on the English corpus, suggesting that HybridWordVec clearly improves classification accuracy compared with other word embedding models such as word2vec and GloVe.
{"title":"Topic Classification Based on Improved Word Embedding","authors":"Liangliang Sheng, Lizhen Xu","doi":"10.1109/WISA.2017.44","DOIUrl":"https://doi.org/10.1109/WISA.2017.44","url":null,"abstract":"Topic classification is a foundational task in many NLP applications. Traditional topic classifiers often rely on many humandesigned features, while word embedding and convolutional neural network based on deep learning are introduced to realize topic classification in recent years. In this paper, the influence of different word embedding for CNN classifiers is studied, and an improved word embedding named HybridWordVec is proposed, which is a combination of word2vec and topic distribution vector. Experiment on Chinese corpus Fudan set and English corpus 20Newsgroups is conducted. The experiment turns out that CNN with HybridWordVec gains an accuracy of 91.82% for Chinese corpus and 95.67% for English corpus, which suggests HybridWordVec can obviously improve the classification accuracy comparing with other word embedding models like word2vec and GloVe.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115110407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sentiment tendency analysis of news reports aims to discover the audience's attitude toward a hot event and is an important part of emotion analysis. In the context of China's going-out strategy, potential risks can be avoided more effectively with the help of in-depth study and interpretation of China's relevant policies toward a given country or region, together with an understanding of local public opinion and conditions. Because the official languages of the countries along the Belt and Road are mostly less-resourced languages, the existing mature tools, which take Chinese and English as their research objects, cannot be used, so sentiment tendency analysis for these languages is a very challenging task. Building on foundational resources (an Indonesian sentiment dictionary, a degree adverb dictionary, a negation word dictionary, and a stop word dictionary), this paper puts forward a set of methods for calculating sentiment tendencies in Indonesian. We apply these methods to coverage of the THAAD event in three major mainstream Indonesian media outlets and interpret and analyze the results.
{"title":"Sentiment Tendency Analysis of THAAD Event in Indonesian News","authors":"Ye Liang, Bing Fu, Zongchun Li","doi":"10.1109/WISA.2017.48","DOIUrl":"https://doi.org/10.1109/WISA.2017.48","url":null,"abstract":"The sentiment tendency analysis based on the news reports aims to discover the audience's attitude towards the hot event, which is an important research content of the emotion analysis. In the context of China's going-out strategy, we can effectively avoid the potential risks in the help of the in-depth study and interpretation of China's relevant policy in a certain country and region and the understanding of the local public opinion and the conditions of the people. As the official languages of those countries along the Belt and Road are mostly uncommon languages, we can not use the existing mature tools which use Chinese and English as the research object, so the sentiment tendency analysis to the uncommon languages is a very challenging task. On the basis of completing the basic work of Indonesian sentiment dictionary, degree adverb dictionary, negative word dictionary and stop word dictionary, this paper puts forward a set of calculating methods aiming at the sentiment tendencies in Indonesian language, which is applied to the calculation of the sentiment tendencies related to THAAD event in three major mainstream media in Indonesia, and the results of the calculation will be interpreted and analyzed.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116148101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we propose efficient query workload partitioning techniques to reduce query processing times in parallel search engines. Existing methods cannot offer both high cache hit ratios and caching-aware load balance. To solve this problem, we propose effective solutions that capture the tradeoff between cache hit ratio and load balance in order to reduce the total query processing time. The performance of the proposed algorithms is demonstrated by extensive experiments on real datasets; the results show that our algorithms improve efficiency by at least 30% compared with extensions of current methods such as the round-robin based algorithm.
{"title":"Caching-Aware Techniques for Query Workload Partitioning in Parallel Search Engines","authors":"Chuanfei Xu, Yanqiu Wang, Pin Lv, Jia Xu","doi":"10.1109/WISA.2017.33","DOIUrl":"https://doi.org/10.1109/WISA.2017.33","url":null,"abstract":"In this work, we propose efficient query workload partition techniques to reduce processing times of queries in parallel search engines. Existing methods cannot offer both high cache hit ratios and caching-aware load balance of the system. Aiming to solve this problem, we propose effective solutions to capture tradeoff between the cache hit ratio and load balance to reduce the total query processing time. The performance of the proposed algorithms are demonstrated by extensive experiments on real datasets, and the experimental results demonstrate that our algorithms have an efficiency improvement of up to at least 30% compared to extending current methods such as the roundrobin based algorithm and so on.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127359365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The version of an Android application (app) is updated frequently, and rewriting test scripts for each version update is laborious and expensive, so reusing existing test scripts is a better choice. Although an app's business logic is relatively stable across versions, changes to user interface (UI) controls in a new version tend to cause the original test scripts to fail, which is the main problem in test script reuse. In this paper we address this problem by developing an XPath-based approach to reusing test scripts for Android apps when the locations, names, or property values of UI controls change. In our approach, the test scripts use XPath expressions to locate the UI controls. The approach first identifies failed test scripts and no-longer-valid XPath expressions by executing the original test scripts on the new version of the app. Next, it uses the invalid XPath expressions to find the difference between the two DOMs corresponding to a view in the changed page of the new version and a view in the original page of the previous version, respectively. Finally, it uses the DOM difference to repair the XPath expressions, thereby achieving the reuse of test scripts. We have implemented a prototype of the approach based on Robotium and used it to conduct experiments on two real-world Android apps. The results show that our approach achieves a higher script reuse rate than Robotium.
{"title":"An XPath-Based Approach to Reusing Test Scripts for Android Applications","authors":"Fei Song, Zhuoming Xu, F. Xu","doi":"10.1109/WISA.2017.49","DOIUrl":"https://doi.org/10.1109/WISA.2017.49","url":null,"abstract":"The version of an Android application (app) is updated frequently and rewriting test scripts for each version update is laborious and expensive, so reusing existing test scripts is a better choice. Although the app's business logic is relatively stable during the process of app version evolution, user interface (UI) control changes in the new version tend to cause the original test scripts to fail, which is the main problem in test script reuse. In this paper we address this problem by developing an XPath-based approach to reusing test scripts for Android apps in the case of changes in the locations, names, or property values of UI controls in the app. In our approach, the test scripts use XPath expressions to locate the UI controls. The approach first identifies failed test scripts and no longer valid XPath expressions by executing the original test scripts on the new version of the app. Next, it uses the invalid XPath expressions to find the difference between the two DOMs corresponding to a view in the changed page in the new version and a view in the original page in the previous version, respectively. Finally, it uses the DOM difference to repair the XPath expressions, thereby achieving the reuse of test scripts. We have implemented a prototype of the approach based on Robotium and used it to conduct experiments on two real-world Android apps. The results show that our approach can achieve a higher script reuse percent than Robotium.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125817246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frequent intra-sequence pattern mining and inter-sequence pattern mining are both important forms of association rule mining for different applications. However, most algorithms focus on just one of them, as attempting both is usually inefficient. To address this deficiency, we propose FIIP-BM, a Frequent Intra-sequence and Inter-sequence Pattern mining algorithm that uses bitmaps with a maximal span (maxSpan). FIIP-BM transforms each transaction into a bit vector, adjusts the maximal span according to the user's demand, and obtains the frequent sequences through logical AND operations. For candidate 2-pattern generation, the subscripts of the joining items are checked first; the bit vector of a joining item is left-shifted before the calculation if its subscript is not 0. A left-alignment rule handles bit vectors of different lengths. FIIP-BM can mine both intra-sequence and inter-sequence patterns. Experiments demonstrate the computational speed and memory efficiency of the FIIP-BM algorithm.
{"title":"Mining Frequent Intra-Sequence and Inter-Sequence Patterns Using Bitmap with a Maximal Span","authors":"Wenzhe Liao, Qian Wang, Luqun Yang, Jiadong Ren, D. Davis, Changzhen Hu","doi":"10.1109/WISA.2017.70","DOIUrl":"https://doi.org/10.1109/WISA.2017.70","url":null,"abstract":"Frequent intra-sequence pattern mining and inter-sequence pattern mining are both important ways of association rule mining for different applications. However, most algorithms focus on just one of them, as attempting both is usually inefficient. To address this deficiency, FIIP-BM, a Frequent Intra-sequence and Inter-sequence Pattern mining algorithm using Bitmap with a maxSpan is proposed. FIIP-BM transforms each transaction to a bit vector, adjusts the maximal span according to user's demand and obtains the frequent sequences by logic And-operation. For candidate 2-pattern generation, the subscripts of the joining items should be checked first; the bit vector of the joining item will be left-shifted before calculation if the subscript is not 0. Left alignment rule is used for different bit vector length problems. FIIP-BM can mine both intra-sequence and inter-sequence patterns. Experiments are conducted to demonstrate the computational speed and memory efficiency of the FIIP-BM algorithm.","PeriodicalId":204706,"journal":{"name":"2017 14th Web Information Systems and Applications Conference (WISA)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131814941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}