Yufan Yang, Yi Feng, Jidong Ge, Yemao Zhou, Jin Zeng, Chuanyi Li, B. Luo
With the continuous advancement of informatization in the Chinese People's Courts, the courts' interest in extracting and applying information now covers not only structured data but also semi-structured and unstructured data. In-depth studies of judgment documents often require collecting the judgment result as an important data dimension, and the cited statutes are the core of that result, so the completeness and correctness of the extracted statutes are crucial to judgment-document processing. However, when a specific judgment document is written, the same statute may appear in different string forms owing to variations in writing style, which directly introduces errors into the data source. Comparing the edit distance between strings can, to a certain extent, judge their similarity. We therefore devise an automatic method based on the edit distance algorithm that builds a disparity model between different statute strings to obtain the standardized writing of data of the same type. This method removes non-standard writings of statutes and ultimately yields a standard statute collection. It is more efficient than enumerating all possible writings, which requires manual participation as well as additional data storage and updates.
"Checking the Statutes in Chinese Judgment Document Based on Editing Distance Algorithm." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.1
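The core comparison the abstract describes can be sketched with a plain Levenshtein edit distance. The canonical statute list, the variant strings, and the distance threshold below are illustrative assumptions, not the paper's actual data or model:

```python
def edit_distance(a, b):
    """Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                      # delete a[i-1]
                        dp[j - 1] + 1,                  # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))  # substitute
            prev = cur
    return dp[-1]

def normalize(variants, canonical, max_dist=2):
    """Map each variant statute string to the closest canonical writing,
    provided it is within max_dist edits; otherwise leave it unmatched."""
    result = {}
    for v in variants:
        best = min(canonical, key=lambda c: edit_distance(v, c))
        if edit_distance(v, best) <= max_dist:
            result[v] = best
    return result
```

A fixed threshold is the simplest policy; the paper's disparity model presumably makes a finer-grained decision.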
C. Zhuang, Yemao Zhou, Jidong Ge, Zhongjin Li, Chuanyi Li, Xiaoyu Zhou, B. Luo
Judgment documents contain a wealth of valuable information. Because the original judgment documents are written as plain text, information cannot be obtained from them directly, which hinders their study. We propose an approach that parses Chinese judgment documents into structured documents to solve this problem. It divides a judgment document into logical segments, then extracts and labels information items from these segments. The information items are used to build an analytic document information model, which is output as a structured XML document.
"Information Extraction from Chinese Judgment Documents." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.67
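The segment-then-serialize step could look roughly like the following sketch. The section names and the flat XML layout are assumptions for illustration; the paper's actual document information model is richer:

```python
import xml.etree.ElementTree as ET

# Hypothetical logical segments; real Chinese judgment documents would be
# split on the fixed phrases courts use to open each part.
SECTIONS = ["header", "facts", "reasoning", "result"]

def to_xml(segments):
    """Serialize extracted logical segments as a structured XML document."""
    root = ET.Element("judgment")
    for name in SECTIONS:
        ET.SubElement(root, name).text = segments.get(name, "")
    return ET.tostring(root, encoding="unicode")
```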
Jingke Xu, Xuefa Xia, Huanliang Sun, Shoujing Wang, Ge Yu
The influence of a spatial position describes how strongly it affects spatial objects and can be measured by the number of objects affected. Evaluating spatial position influence, which is widely used in architectural planning and facility location, is a typical problem in spatial databases. Previous studies assumed that a spatial object affects only one spatial position, so an object's influence was calculated from the number of spatial objects in its area. In reality, however, a spatial object can affect many spatial positions, and the effects are multiple. In this study, we provide a new evaluation model based on RkNN. We propose a new measurement that weights each object's contribution by the distance between the spatial object and the spatial position, making the model better suited to practical applications. In addition, we propose a location algorithm based on the RkNN influence evaluation model. The algorithm addresses problems such as making facilities provide the best service to customers while using each facility effectively. It calculates the influence of each facility and evaluates the rationality of a location scheme by an equilibrium coefficient: the smaller the coefficient, the more reasonable the scheme. The location algorithm based on the new model performs better in practical applications and leads to more reasonable and effective facility locations.
"Research on Influence Evaluation Based on RkNN and Its Application in Location Problem." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.40
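A minimal sketch of the distance-weighted influence idea. Both the 1/(1+d) decay and the equilibrium formula below are hypothetical choices made for illustration, not the paper's exact definitions:

```python
import math

def influence(facility, objects):
    """Distance-weighted influence: every spatial object contributes a
    weight that decays with Euclidean distance, instead of the
    all-or-nothing count of earlier models."""
    return sum(1.0 / (1.0 + math.hypot(facility[0] - o[0], facility[1] - o[1]))
               for o in objects)

def equilibrium(influences):
    """A plausible equilibrium coefficient: the spread of facility
    influences relative to their mean (smaller = more balanced)."""
    mean = sum(influences) / len(influences)
    return (max(influences) - min(influences)) / mean
```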
The development of LBS brings great convenience to our lives but also presents new challenges to privacy protection. Many existing methods are inadequate because they assume that all users can be trusted, which is not practical; consequently, they cannot resist query sampling attacks or self-betrayal attacks. In addition, they do not take location semantics into account, so they are vulnerable to location homogeneity attacks. To solve these problems, we introduce the concept of USLD (user similar location diversity). We consider the scenario in which some users are untrusted, and the users chosen as candidates may be in locations with the same semantics. Based on the idea that users whose privacy settings are similar to the real user's are more plausible than others, we select similar users using the Adjusted Cosine Similarity, and use the Earth Mover's Distance to calculate location semantics. Our method resists query sampling attacks, self-betrayal attacks, and location homogeneity attacks, and experiments show that it is practical.
M. Ma and Yuejin Du. "USLD: A New Approach for Preserving Location Privacy in LBS." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.27
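The adjusted cosine similarity named in the abstract can be sketched as follows; treating each user's privacy settings as a numeric vector is an assumption made for illustration:

```python
import math

def adjusted_cosine(u, v):
    """Adjusted cosine similarity: each user's mean is subtracted from
    their vector before taking the cosine, so users who scale their
    privacy settings differently can still be matched."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    du = [x - mu for x in u]
    dv = [x - mv for x in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = math.sqrt(sum(a * a for a in du)) * math.sqrt(sum(b * b for b in dv))
    return num / den if den else 0.0
```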
Mining rich knowledge from clinical texts has become a popular topic. Knowledge graphs are widely used to integrate and manage abundant knowledge, and entity recognition and relation extraction play important roles in constructing them. In this paper, we develop a system that recognizes entities and extracts their relations from clinical texts in Electronic Medical Records. Our system implements four major functions: manual entity annotation, automatic entity recognition, manual relation annotation, and automatic relation extraction. The annotation tools are designed to help professionals manually annotate objects in the original clinical texts. Moreover, automatic entity recognition and relation extraction, in which a CRF and a CNN are applied, are available to professionals before manual annotation to increase efficiency. Our system has been used in several applications, such as medical knowledge graph construction and a health QA system.
Chi Chen, Hongxia Liu, and Chunxiao Xing. "A System for Recognizing Entities and Extracting Relations from Electronic Medical Records." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.54
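Downstream of a CRF tagger like the one the system uses, the predicted BIO label sequence still has to be decoded into entity spans. A sketch of that step, with a hypothetical entity label:

```python
def bio_to_spans(tokens, tags):
    """Decode a BIO tag sequence (e.g. from a CRF tagger) into
    (entity_type, text) spans; tokens are joined without spaces,
    which suits Chinese clinical text."""
    spans, cur_type, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_type:
                spans.append((cur_type, "".join(cur_toks)))
            cur_type, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_type == tag[2:]:
            cur_toks.append(tok)
        else:
            if cur_type:
                spans.append((cur_type, "".join(cur_toks)))
            cur_type, cur_toks = None, []
    if cur_type:
        spans.append((cur_type, "".join(cur_toks)))
    return spans
```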
Yebing Luo, Tiezheng Nie, Derong Shen, Yue Kou, Ge Yu
As data volumes grow rapidly, the cost of detecting duplicate entities in data cleaning has increased significantly. However, some real-time applications only need to identify as many duplicate entities as possible within a limited time, rather than all of them. Existing works sort similar records into blocks and arrange the processing order of the blocks to detect duplicate entities progressively, but this only works well when the record attributes are suitable for sorting. This paper therefore proposes a novel progressive deduplication method for records that cannot be sorted by their attributes. The method distributes records into blocks based on their features and generates a modified Bloom filter index for each block. It then uses the Bloom filter to predict the probability of duplicate entities in each block, which determines the processing order of the blocks so that duplicates are detected more quickly. Comprehensive experiments show that, within a fixed time, this algorithm detects far more duplicates than the other algorithms considered in related work.
"A Progressive Method for Detecting Duplication Entities Based on Bloom Filters." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.11
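The block-prioritization idea can be sketched with a basic Bloom filter; the paper's "modified" filter and its probability model are richer than this, and the key-per-record scoring below is an illustrative simplification:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter with k bit positions derived from blake2b."""
    def __init__(self, size=1024, k=3):
        self.size, self.k = size, k
        self.bits = bytearray(size)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.blake2b(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

def duplicate_score(block):
    """Estimate how many records in a block repeat an earlier record's
    blocking key; blocks with higher scores are processed first."""
    bf, hits = BloomFilter(), 0
    for key in block:
        if key in bf:
            hits += 1
        else:
            bf.add(key)
    return hits
```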
Insufficient real-time response, accuracy, and intelligence have become key issues in the practical application of traffic guidance information services. This paper addresses these issues by proposing a new dynamic route guidance method. It first establishes a concurrent global route search, which finds multiple relatively static shortest routes and then selects the globally optimal shortest route for the current traffic flow. Second, using a sliding-window model, the method extracts the real-time traffic data stream reflected in the spatial and temporal changes of vehicle locations. Combined with a hidden Markov model, the method can also forecast short-term traffic states and decide whether local replanning is necessary.
Yongmei Zhao and Hongmei Zhang. "Research on Short-Time Prediction of Dynamical Local Replanning Route Guidance Method Based on HMM." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.32
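One HMM prediction step for short-term traffic states can be sketched as below. The three states and the transition matrix are invented for illustration; in the paper these would be learned from the vehicle-location data stream:

```python
# Hypothetical traffic states and an illustrative transition matrix.
STATES = ["free", "slow", "congested"]
TRANS = {
    "free":      {"free": 0.70, "slow": 0.25, "congested": 0.05},
    "slow":      {"free": 0.20, "slow": 0.60, "congested": 0.20},
    "congested": {"free": 0.05, "slow": 0.35, "congested": 0.60},
}

def predict_next(belief):
    """Propagate the current state belief one step through the transition
    matrix and return the most likely next state plus the full distribution."""
    nxt = {s: 0.0 for s in STATES}
    for s, p in belief.items():
        for t, pt in TRANS[s].items():
            nxt[t] += p * pt
    return max(nxt, key=nxt.get), nxt
```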
When the traditional collaborative filtering algorithm faces highly sparse data, its precision and recommendation quality become unsatisfactory. With the development of social networks, it is possible to selectively fill the missing values in the user-item matrix using friendship or trust relationship information from social networks. Following the memory-based collaborative filtering algorithm, this paper considers its two steps, similarity calculation and user rating prediction; it fills the missing values appropriately and improves memory-based collaborative filtering recommendation algorithms to integrate social relations. Experiments on the Epinions dataset show that the improved algorithm effectively alleviates the sparsity of user rating data and outperforms other classic algorithms on the RMSE and MAP evaluation metrics.
Jinglong Zhang, Mengxing Huang, and Yu Zhang. "A Collaborative Filtering Recommendation Algorithm for Social Interaction." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.26
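The rating-prediction step of memory-based collaborative filtering can be sketched as below. The paper fills missing matrix entries from social trust first; here the ratings dict is assumed to be post-filling, and the similarity values are taken as given:

```python
def predict_rating(user, item, ratings, sims):
    """Memory-based CF prediction: the target user's mean rating plus a
    similarity-weighted average of neighbours' deviations from their means."""
    means = {u: sum(r.values()) / len(r) for u, r in ratings.items()}
    num = den = 0.0
    for v, r in ratings.items():
        if v == user or item not in r:
            continue
        s = sims.get((user, v), 0.0)
        num += s * (r[item] - means[v])
        den += abs(s)
    return means[user] + (num / den if den else 0.0)
```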
Traditional business systems can provide neither suitable decisions for managers nor personalized service for customers. With the development of computing and network technology, cloud computing and big data technology are now widely used in many fields. This paper presents a smart business cloud system based on Hadoop. It first uses Hadoop to build a cloud computing system that provides powerful storage and computing capacity, then applies big data mining to analyze the collected data and derive rules and knowledge. A smart business system must not only compute and analyze but also provide ways to collect data and push messages to customers. Our system therefore includes a module that collects data from many sensors and the network, and uses the Android system to push personalized, valuable messages to different customers.
Ouyang Hao, Wang Zhi Wen, H. Jin, and H. Ping. "Smart Business Cloud Based on Hadoop." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.20
Influence maximization is a very active research topic in social networks, yet it is difficult to find an algorithm that balances time complexity against the accuracy of the computed result. To solve this problem, we propose two new algorithms in this paper. First, we present a heuristic algorithm based on the greedy algorithm, which greatly reduces the time complexity while still producing good results. Second, we present an algorithm that applies the k-means idea to the influence maximization problem, using it to find the s seed nodes. We also provide proofs for both algorithms.
Guigang Zhang, Sujie Li, Jian Wang, Ping Liu, Yibing Chen, and Yunchuan Luo. "New Influence Maximization Algorithm Research in Big Graph." 2017 14th Web Information Systems and Applications Conference (WISA), November 2017. DOI: 10.1109/WISA.2017.50
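The greedy baseline both of the paper's algorithms build on can be sketched generically. The spread estimator is passed in as a function, since the paper's heuristic replaces the expensive Monte-Carlo estimate used by the classic greedy method; the one in the test below is a toy one-hop reach:

```python
def greedy_seeds(graph, spread, k):
    """Greedy seed selection for influence maximization: repeatedly add
    the node with the largest marginal gain in estimated spread."""
    seeds = set()
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: spread(seeds | {n}) - spread(seeds))
        seeds.add(best)
    return seeds
```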