Substring Position Search over Encrypted Cloud Data Using Tree-Based Index
M. Strizhov, I. Ray. doi:10.1109/IC2E.2015.33

Existing Searchable Encryption (SE) solutions can handle simple Boolean search queries, such as single- or multi-keyword queries, but cannot handle substring search queries over encrypted data that also involve identifying the position of the substring within the document. Such queries are relevant in areas such as searching DNA data. In this paper, we propose a tree-based Substring Position Searchable Symmetric Encryption (SSP-SSE) scheme to close this gap. Our solution efficiently finds occurrences of a substring over encrypted cloud data. We formally define the leakage functions and security properties of SSP-SSE, and we prove that the proposed scheme is secure against chosen-keyword attacks mounted by an adaptive adversary. Our analysis demonstrates that SSP-SSE introduces very low computation and storage overhead.
A Multi-resource Sharing-Aware Approximation Algorithm for Virtual Machine Maximization
Safraz Rampersaud, Daniel Grosu. doi:10.1109/IC2E.2015.20

Cloud providers face the challenge of efficiently managing their infrastructure: minimizing resource consumption while allocating requests so that profit is maximized. We address this challenge by designing a greedy approximation algorithm for the multi-resource sharing-aware virtual machine maximization (MSAVMM) problem. The MSAVMM problem requires determining the set of VMs that can be instantiated on a given server such that the profit derived from hosting the VMs is maximized. A solution to this problem has to consider the sharing of memory pages among VMs and the restricted capacity of each type of resource requested by the VMs. We analyze the performance of the proposed algorithm by determining its approximation ratio and by performing extensive experiments against other sharing-aware VM allocation algorithms.
Automated Capturing and Systematic Usage of DevOps Knowledge for Cloud Applications
Johannes Wettinger, V. Andrikopoulos, F. Leymann. doi:10.1109/IC2E.2015.23

DevOps is an emerging paradigm that actively fosters collaboration between system developers and operations in order to enable efficient end-to-end automation of software deployment and management processes. DevOps is typically combined with Cloud computing, which enables rapid, on-demand provisioning of underlying resources such as virtual servers, storage, or database instances using APIs in a self-service manner. Today, an ever-growing number of DevOps tools, reusable artifacts such as scripts, and Cloud services are available to implement DevOps automation, so making an informed decision about the approach(es) appropriate for the needs of an application is hard. In this work we present a collaborative and holistic approach to capture DevOps knowledge in a knowledge base. Besides the ability to capture expert knowledge and utilize crowdsourcing approaches, we implemented a crawling framework to automatically discover and capture DevOps knowledge. Moreover, we show how this knowledge is utilized to deploy and operate Cloud applications.
Container Orchestration for Scientific Workflows
Wolfgang Gerlach, Wei Tang, Andreas Wilke, Dan Olson, Folker Meyer. doi:10.1109/IC2E.2015.87

Recently, Linux container technology has been gaining attention as it promises to transform the way software is developed and deployed. Their portability and ease of deployment make Linux containers an ideal technology for scientific workflow platforms. AWE/Shock is a scalable data analysis platform designed to execute data-intensive scientific workflows. Recently we introduced Skyport, an extension to AWE/Shock that uses Docker container technology to orchestrate and automate the deployment of individual workflow tasks onto the worker machines. Installing software in an independent execution environment for each task reduces complexity and offers an elegant solution to installation problems such as library version conflicts. The systematic use of isolated execution environments for workflow tasks also offers a convenient and simple mechanism for reproducing scientific results.
Compressed Hierarchical Bitmaps for Efficiently Processing Different Query Workloads
P. Nagarkar. doi:10.1109/IC2E.2015.99

Today the amount of data being processed is growing manyfold, and this data deluge demands fast and scalable data processing systems. Indexing is a very common mechanism used in data processing systems for fast and efficient search over the data. In many systems, the I/O needed to read and fetch the relevant part of the index into main memory dominates the overall query processing cost. My research is focused on reducing this I/O cost through effective indexing algorithms. I have particularly focused on bitmap indices, a very efficient indexing mechanism used especially in data warehouse environments due to their high compressibility and the ability to perform bitwise operations even on compressed bitmaps. Column-store architectures are preferred in such environments because of their ability to leverage bitmap indices. Column domains are often hierarchical in nature, so using hierarchical bitmap indices is often beneficial. I have designed algorithms for choosing a subset of these hierarchical bitmap indices, for one-dimensional as well as spatial data, in order to execute range query workloads under various scenarios, and I have shown experimentally that these solutions are very efficient and scalable. Currently, I am focusing on leveraging hierarchical bitmap indices to solve approximate nearest neighbor queries.
Understanding Real World Data Corruptions in Cloud Systems
Peipei Wang, D. Dean, Xiaohui Gu. doi:10.1109/IC2E.2015.41

Big data processing is one of the killer applications for cloud systems, and MapReduce systems such as Hadoop are the most popular big data processing platforms used in the cloud. Data corruption is one of the most critical problems in cloud data processing: it not only has a serious impact on the integrity of individual application results but also affects the performance and availability of the whole data processing system. In this paper, we present a comprehensive study of 138 real-world data corruption incidents reported in Hadoop bug repositories. We characterize these data corruption problems along four dimensions: 1) what impact can data corruption have on the application and system? 2) how is data corruption detected? 3) what are the causes of data corruption? and 4) what problems can occur while attempting to handle data corruption? Our study makes the following findings: 1) the impact of data corruption is not limited to data integrity; 2) existing data corruption detection schemes are quite insufficient: only 25% of data corruption problems are correctly reported, 42% are silent corruptions without any error message, and 21% receive imprecise error reports, and we also found that the detection systems raised false alarms in 12% of cases; 3) data corruption has various causes, such as improper runtime checking, race conditions, inconsistent block states, improper network failure handling, and improper node crash handling; and 4) existing data corruption handling mechanisms (i.e., data replication, replica deletion, simple re-execution) make frequent mistakes, including replicating corrupted data blocks, deleting uncorrupted data blocks, and causing undesirable resource hogging.
Towards Secure Agile Agent-Oriented System Design
S. H. Adelyar. doi:10.1109/IC2E.2015.95

Agile methods are criticized as inadequate for developing secure digital services, and the software research community has so far studied security for agile practices only partially. Our more holistic approach identifies the security challenges and benefits of agile practices that relate to the core "embrace change" principle. For this case-study-based research, we consider eXtreme Programming (XP) for a holistic integration of security into agile practices.
Transforming Vertical Web Applications into Elastic Cloud Applications
Nikola Tanković, Tihana Galinac Grbac, Hong Linh Truong, S. Dustdar. doi:10.1109/IC2E.2015.15

A huge number of vertical applications have been developed for isolated computing environments. Due to the increasing demand for additional resources, there is a clear need to adapt these applications to distributed environments. However, this is not an easy task, and numerous variants are possible. Moreover, in this transition new quality requirements, such as application elasticity, become important. Elasticity has to be built into a software system to enable smooth cost optimization at run time. In this paper, we provide a framework for evaluating different variants of transforming vertical Java EE multi-tiered applications into elastic cloud applications. Supported by this framework, software developers are guided in transforming their applications toward an optimal elasticity strategy. The framework is evaluated by slicing an existing multi-tiered SaaS Java application used in the Croatian market and evaluating its elasticity.
Cross-Layer Scheduling in Cloud Systems
H. Alkaff, Indranil Gupta, Luke M. Leslie. doi:10.1109/IC2E.2015.36

Today, cloud computing engines such as stream-processing Storm and batch-processing Hadoop are increasingly run atop software-defined networks (SDNs). In such cloud stacks, the scheduler of the application engine (which allocates tasks to servers) remains decoupled from the SDN scheduler (which allocates network routes). We propose a new approach that performs cross-layer scheduling between the application layer and the networking layer. This coordinated scheduling orchestrates the placement of application tasks (e.g., Hadoop maps and reduces, or Storm bolts) in tandem with the selection of the network routes that arise from these tasks. We present results from both cluster deployment and simulation, using two representative network topologies: Fat-tree and Jellyfish. Our results show that cross-layer scheduling can improve the throughput of Hadoop and Storm by 26% to 34% in a 30-host cluster, and that it scales well.
SDStorage: A Software Defined Storage Experimental Framework
Ala Darabseh, M. Al-Ayyoub, Y. Jararweh, E. Benkhelifa, M. Vouk, A. Rindos. doi:10.1109/IC2E.2015.60

With the rapid growth of data centers and the unprecedented increase in storage demands, traditional storage control techniques are unsuitable for dealing with this large volume of data in an efficient manner. Software Defined Storage (SDStore) addresses this issue by abstracting the storage control operations away from the storage devices and placing them inside a centralized controller in the software layer. Building a real SDStore system without any simulation or emulation is expensive and carries considerable risk, so there is a need to simulate such systems before real-life implementation and deployment. In this paper we present SDStorage, an experimental framework that provides a novel virtualized testbed environment for SDStore systems. SDStorage is based on the Mininet Software Defined Network (SDN) OpenFlow simulator and is built on top of it. The main components of Mininet, namely the host, the switch, and the controller, are customized to serve the needs of SDStore simulation environments.