Indoor localization has become increasingly popular along with the rapid growth of technology and information systems. Research has been conducted in many areas, especially on algorithms. Fingerprinting is categorized as an algorithm that requires knowledge of training data. The training data are then processed with a machine learning approach, Naïve Bayes, a simple and efficient classifier for estimating location. This study conducted an experiment with Naïve Bayes to classify the unknown location of an object based on the signal strength of Bluetooth Low Energy beacons. The method requires two processes: collecting training data and evaluating test data. The analysis showed that Naïve Bayes works well in estimating the correct position of an object according to its class.
{"title":"Naïve Bayes Classifier for Indoor Positioning using Bluetooth Low Energy","authors":"Dzata Farahiyah, Rifky Mukti Romadhoni, Setyawan Wahyu Pratomo","doi":"10.1145/3299819.3299842","DOIUrl":"https://doi.org/10.1145/3299819.3299842","url":null,"abstract":"Indoor localization becomes more popular along with the rapid growth of technology dan information system. The research has been conducted in many areas, especially in algorithm. Based on the need for knowledge of training data, Fingerprinting algorithm is categorized as the one that works with it. Training data is then computed with the machine learning approach, Naïve Bayes. Naïve Bayes is a simple and efficient classifier to estimate location. This study conducted an experiment with Naïve Bayes in order to classify unknown location of object based on the signal strength of Bluetooth low energy. It required 2 processes, collecting training data and evaluating test data. The result of the analysis with Naïve Bayes showed that the algorithm works well to estimate the right position of an object regarding its class.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127837525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Short-term load forecasting is fundamental to the normal operation and control of power systems. The results of power load forecasting have a great impact on the dispatching operation of the power system and the production operation of the enterprise. Accurate load forecasting helps improve the safety and stability of the power system and reduces enterprise costs. In order to extract the effective information contained in the data and improve the accuracy of short-term load forecasting, this paper proposes a long short-term memory (LSTM) neural network model with deep learning ability for short-term load forecasting, combined with a clustering algorithm. Deep learning is in line with the trend of big data and has a strong ability to learn from and summarize large amounts of data. Based on a study of the characteristics and influencing factors of the characteristic enterprises, the collected samples are clustered to establish similar-day sets. This paper also studies the impact of different types of load data on prediction and the practical problem of selecting input training samples. The LSTM prediction model is built by subdividing and clustering the input load sample set. Compared with other traditional methods, the results show that the proposed LSTM model has higher accuracy and applicability.
{"title":"Application of Deep Learning Method in Short-term Load Forecasting of Characteristic Enterprises","authors":"Yuchen Dou, Xinman Zhang, Zhihui Wu, Hang Zhang","doi":"10.1145/3299819.3299849","DOIUrl":"https://doi.org/10.1145/3299819.3299849","url":null,"abstract":"Short-term load forecasting is an important basic work for the normal operation and control of power systems. The results of power load forecasting have a great impact on dispatching operation of the power system and the production operation of the enterprise. Accurate load forecasting would help improve the safety and stability of power system and save the cost of enterprise. In order to extract the effective information contained in the data and improve the accuracy of short-term load forecasting, this paper proposes a long-short term memory neural network model (LSTM) with deep learning ability for short-term load forecasting combined with clustering algorithm. Deep learning is in line with the trend of big data and has a strong ability to learn and summarize large amounts of data. Through the research on the characteristics and influencing factors of the characteristic enterprises, the collected samples are clustered to establish similar day sets. This paper also studies the impact of different types of load data on prediction and the actual problem of input training sample selection. The LSTM prediction model is built with subdividing and clustering the input load sample set. Compared with other traditional methods, the results prove that LSTM proposed has higher accuracy and applicability.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127365819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing is a new paradigm for offering computing services via the Internet. Customers can lease infrastructure resources, such as CPU cores, memory, and disk storage, from cloud providers based on a "pay as you require" model. The approach in this paper distributes the providers' resources (storage, processor, memory) to customers in an efficient manner, satisfying both parties by meeting requirements and guaranteeing an efficient and fair distribution of the resources. The proposed system consists of two phases. In the first phase, an interface allows both customers and providers to enter their inputs, and the system allocates customers' demands based on the availability of the provider's resources. In the second phase, the system monitors the customers' usage of the resources to determine whether they are using all the resources allocated to them. The system then reallocates VM resources that have not been used for a while to other customers. This reduces cost and increases the provider's profit.
{"title":"An Efficient Allocation of Cloud Computing Resources","authors":"Sultan Alshamrani","doi":"10.1145/3299819.3299828","DOIUrl":"https://doi.org/10.1145/3299819.3299828","url":null,"abstract":"The Cloud computing is a new paradigm for offering computing services via the Internet. Customers can lease infrastructure resources from cloud providers, such as CPU core, memory and disk storage, based on a \"pay as you require\" model. The approach in this paper is about distributing the resources (storage, processor, memory) of cloud providers to the customers by efficient manner, satisfying parties in terms of providing requirements and guarantee efficient and fair distribution of the resources. The approach system consists of two phases. In the first phase, we will create an interface in order to allow both customers and providers to insert their inputs. The system will allocate customers' demands based on the availability of the provider resources. In the second phase, the system will start to monitor the customers' usage of the resources to determine whether the customers using all the resources that have been allocated to them or did not. Then the system will reallocate the VMs resources that have not been used for a while to other customers. This will lead to reduce the cost and increase the provider profits.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121184608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, artificial intelligence (AI) has become a trend all over the world. This trend has led to the application and development of intelligent systems that apply AI. In this paper, we describe a system architecture that uses AI, on a platform called EDISON, for computer science and engineering research. This architecture can be used to develop intelligent systems and can support applications in various fields by assisting in the development of algorithms and computer code. We demonstrate the scalability of the proposed architecture on EDISON using different languages and application examples from various fields.
{"title":"AI based intelligent system on the EDISON platform","authors":"Jin Ma, Sung-Chan Park, Jung-Hun Shin, Nam Gyu Kim, Jerry H. Seo, Jong-Suk Ruth Lee, J. Sa","doi":"10.1145/3299819.3299843","DOIUrl":"https://doi.org/10.1145/3299819.3299843","url":null,"abstract":"In recent years, artificial intelligence (AI) has become a trend all over the world. This trend has led to the application and development of intelligent system that apply AI. In this paper, we describe a system architecture that uses AI, on a platform called EDISON, for computer science and engineering research. This architecture can be used to develop intelligent systems and can support applications in various fields by assisting in the development of algorithms and computer code. In this paper, we demonstrate the scalability of the proposed architecture on EDISON using different languages and application examples from various fields.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127477648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In modern times, buildings contribute heavily to the overall energy consumption of countries, accounting for up to 45% of total energy consumption in some of them. Hence, a detailed understanding of the dynamics of building energy consumption, and mining the typical daily electricity consumption profiles of households, can open up new avenues for smart energy consumption profiling. This can create new business opportunities for all stakeholders in the energy supply chain, supporting energy management strategies in a smart grid environment and providing opportunities for improving building infrastructure through fault detection and diagnostics. In this context, we propose an approach to predict and re-engineer the hourly energy demand in a residential building. A data-driven system is proposed that uses machine learning techniques such as Multiple Linear Regression and Support Vector Machines to predict electricity demand in a smart building, along with a real-time strategy that enables users to save energy by recommending optimal scheduling of appliances at times of peak load demand, given the consumer's constraints.
{"title":"SmartPeak: Peak Shaving and Ambient Analysis For Energy Efficiency in Electrical Smart Grid","authors":"Sourajit Behera, R. Misra","doi":"10.1145/3299819.3299833","DOIUrl":"https://doi.org/10.1145/3299819.3299833","url":null,"abstract":"In modern times, buildings are heavily contributing to the overall energy consumption of the countries and in some countries they account up to 45% of their total energy consumption. Hence a detailed understanding of the dynamics of energy consumption of buildings and mining the typical daily electricity consumption profiles of households in buildings can open up new avenues for smart energy consumption profiling. This can open up newer business opportunities for all stakeholders in energy supply chain thereby supporting the energy management strategies in a smart grid environment and provide opportunities for improvement in building infrastructure with fault detection and diagnostics. In this context, we propose a approach to predict and re-engineer the hourly energy demand in a residential building. A data-driven system is proposed using machine learning techniques like Multi Linear Regression and Support Vector Machine to predict electricity demand in a smart building along with a real-time strategy to enable the users to save energy by recommending optimal scheduling of the appliances at times of peak load demand, given the consumer's constraints.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125362649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing introduces concerns about data protection and intrusion detection mechanisms. A review of the literature shows that there is still a lack of work on cloud IDSs that implement real-time hybrid detection using the Dendritic Cell Algorithm (DCA) as a practical approach. In addition, there is a lack of threat detection built specifically for cloud computing environments; current implementations still use traditional open-source or enterprise IDSs to detect threats targeting the cloud. Cloud deployments also introduce a new type of attack, the "co-residency" attack, and little research has focused on detecting it. This research aims to provide a hybrid intrusion detection model for the cloud computing environment. For this purpose, a modified DCA is proposed as the main detection algorithm in a new hybrid intrusion detection mechanism, Cloud Co-Residency Threat Detection (CCTD), which combines anomaly and misuse detection. This research also proposes a method for detecting co-residency attacks. In this paper, the co-residency attack detection model was proposed and tested until satisfactory results were obtained with the datasets. The experiment was conducted in a controlled environment using custom-generated co-residency denial-of-service attacks to test the capability of the proposed model in detecting novel co-residency attacks. The results show that the proposed model was able to detect most of the types of attacks conducted during the experiment. The experiment also shows that the CCTD model improves on the DCA previously used to solve similar problems.
{"title":"Cloud Co-Residency Denial of Service Threat Detection Inspired by Artificial Immune System","authors":"Azuan Ahmad, Wan Shafiuddin Zainudin, M. Kama, N. Idris, M. Saudi","doi":"10.1145/3299819.3299821","DOIUrl":"https://doi.org/10.1145/3299819.3299821","url":null,"abstract":"Cloud computing introduces concerns about data protection and intrusion detection mechanism. A review of the literature shows that there is still a lack of works on cloud IDS that focused on implementing real-time hybrid detections using Dendritic Cell algorithm (DCA) as a practical approach. In addition, there is also lack of specific threat detection built to detect intrusions targeting cloud computing environment where current implementations still using traditional open source or enterprise IDS to detect threats targeting cloud computing environment. Cloud implementations also introduce a new term, \"co-residency\" attack and lack of research focusing on detecting this type of attack. This research aims to provide a hybrid intrusion detection model for Cloud computing environment. For this purpose, a modified DCA is proposed in this research as the main detection algorithm in the new hybrid intrusion detection mechanism which works on Cloud Co-Residency Threat Detection (CCTD) that combines anomaly and misuse detection mechanism. This research also proposed a method in detecting co-residency attacks. In this paper the co-residency attack detection model was proposed and tested until satisfactory results were obtained with the datasets. The experiment was conducted in a controlled environment and conducted using custom generated co-residency denial of service attacks for testing the capability of the proposed model in detecting novel co-residency attacks. The results show that the proposed model was able to detect most of the types of attacks that conducted during the experiment. From the experiment, the CCTD model has been shown to improve DCA previously used to solve similar problem.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125886930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Destination prediction not only helps to understand users' behavior, but also provides basic information for destination-related customized services. This paper studies destination prediction in public bike sharing systems, which are now blooming in many cities as an environmentally friendly short-distance transportation solution. Due to the large number of bike stations (e.g., more than 800 Citi Bike stations in New York City), the accuracy and effectiveness of destination prediction become a problem, and clustering algorithms are often used to reduce the number of destinations. However, grouping bike stations according to their location alone is not effective enough. The contribution of the paper lies in two aspects: 1) it proposes a Compound Stations Clustering method that considers not only the geographic location but also the usage pattern; 2) it provides a framework that uses feature models and corresponding labels with machine learning algorithms to predict destinations for on-going trips. Experiments are conducted on real-world Citi Bike data from New York City for the year 2017, and the results show that our method outperforms baselines in accuracy.
{"title":"Cluster-Based Destination Prediction in Bike Sharing System","authors":"Pengcheng Dai, Changxiong Song, Huiping Lin, Pei Jia, Zhipeng Xu","doi":"10.1145/3299819.3299826","DOIUrl":"https://doi.org/10.1145/3299819.3299826","url":null,"abstract":"Destination prediction not only helps to understand users' behavior, but also provides basic information for destination-related customized service. This paper studies the destination prediction in the public bike sharing system, which is now blooming in many cities as an environment friendly short-distance transportation solution. Due to the large number of bike stations (e.g. more than 800 stations of Citi Bike in New York City), the accuracy and effectiveness of destination prediction becomes a problem, where clustering algorithm is often used to reduce the number of destinations. However, grouping bike stations according to their location is not effective enough. The contribution of the paper lies in two aspects: 1) Proposes a Compound Stations Clustering method that considers not only the geographic location but also the usage pattern; 2) Provide a framework that uses feature models and corresponding labels for machine learning algorithms to predict destination for on-going trips. Experiments are conducted on real-world data sets of Citi Bike in New York City through the year of 2017 and results show that our method outperforms baselines in accuracy.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123571239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we utilize ontology-based information extraction (OBIE) for semantic analysis and terminology linking on a corpus of software requirement specification documents from 400 enterprise-level software development projects. The purpose of this ontology is to perform semi-supervised learning on enterprise-level specification documents, working toward an automated method of defining productivity metrics for software development profiling. Profiling an enterprise-level software development project in the context of productivity is necessary in order to objectively measure the productivity of a software development project and to identify areas of improvement when compared with similar software development profiles or a benchmark of these profiles. We developed a semi-novel methodology for applying NLP and OBIE techniques to determine software development productivity metrics, and evaluated this methodology on multiple practical enterprise-level software projects.
{"title":"Natural Language Processing for Productivity Metrics for Software Development Profiling in Enterprise Applications","authors":"Steven Delaney, Christopher Chan, Doug Smith","doi":"10.1145/3299819.3299830","DOIUrl":"https://doi.org/10.1145/3299819.3299830","url":null,"abstract":"In this paper, we utilize ontology-based information extraction for semantic analysis and terminology linking from a corpus of software requirement specification documents from 400 enterprise-level software development projects. The purpose for this ontology is to perform semi-supervised learning on enterprise-level specification documents towards an automated method of defining productivity metrics for software development profiling. Profiling an enterprise-level software development project in the context of productivity is necessary in order to objectively measure productivity of a software development project and to identify areas of improvement in software development when compared to similar software development profiles or benchmark of these profiles. We developed a semi-novel methodology of applying NLP OBIE techniques towards determining software development productivity metrics, and evaluated this methodology on multiple practical enterprise-level software projects.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115829119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many imitations of electronic components exist in the market. Physically unclonable functions (PUFs) have attracted attention as a countermeasure against these imitations. The 2-1 double arbiter PUF (DAPUF) is one of the PUFs suited to FPGA implementation. However, it has been reported that some PUFs are vulnerable to modeling attacks that use feature extraction, and the effectiveness of feature extraction in modeling attacks against the 2-1 DAPUF has not yet been evaluated. This study evaluated the effectiveness of feature extraction through simulation and FPGA implementation. The results showed that feature extraction was effective for modeling attacks against the 2-1 DAPUF.
{"title":"Feature Extraction Driven Modeling Attack Against Double Arbiter PUF and Its Evaluation","authors":"Susumu Matsumi, Y. Nozaki, M. Yoshikawa","doi":"10.1145/3299819.3299835","DOIUrl":"https://doi.org/10.1145/3299819.3299835","url":null,"abstract":"Many imitations of electronic components exist in the market. The PUF has attracted attention as countermeasures against these imitations. The 2-1 DAPUF is one of the PUFs which is suitable for FPGA implementation. However, it is reported that some PUFs are vulnerable to modeling attacks using feature extraction. Regarding the effectiveness of feature extraction, it has not been evaluated in the modeling attack against 2-1 DAPUF. This study evaluated the effectiveness of feature extraction by simulation and FPGA implementation. The results showed that the feature extraction was effective for modeling attacks against 2-1 DAPUF.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115929102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, with the rise of powerful cloud computing technologies, the machine learning approach to solving complex problems has been greatly accelerated. In the field of text classification, machine learning gives computers the ability to learn and predict tasks without being explicitly programmed, and it is said that enough data are needed for a machine to learn. However, more data tend to cause overfitting in machine learning algorithms, and there are no objective criteria for deciding how many samples are required to achieve a desired level of performance. This article addresses this problem by using feature selection. In our experiments, feature selection is shown to reduce the required size of the training dataset by as much as 66.67%. Meanwhile, the kappa coefficient, as a performance measure of classifiers, increased by up to 11 points. Furthermore, feature selection, as a technique for removing irrelevant features, was found to be able to prevent overfitting to a great extent.
{"title":"Do We Need More Training Samples For Text Classification?","authors":"Wanwan Zheng, Mingzhe Jin","doi":"10.1145/3299819.3299836","DOIUrl":"https://doi.org/10.1145/3299819.3299836","url":null,"abstract":"In recent years, with the rise of exceptional cloud computing technologies, machine learning approach in solving complex problems has been greatly accelerated. In the field of text classification, machine learning is a technology of providing computers the ability to learn and predict tasks without being explicitly labeled, and it is said that enough data are needed in order to let a machine to learn. However, more data tend to cause overfitting in machine learning algorithms, and there is no object criteria in deciding how many samples are required to achieve a desired level of performance. This article addresses this problem by using feature selection method. In our experiments, feature selection is proved to be able to decrease 66.67% at the largest of the required size of training dataset. Meanwhile, the kappa coefficient as a performance measure of classifiers could increase 11 points at the maximum. Furthermore, feature selection as a technology to remove irrelevant features was found be able to prevent overfitting to a great extent.","PeriodicalId":119217,"journal":{"name":"Artificial Intelligence and Cloud Computing Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114468016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}