One of the main contributions of this paper is the introduction of "performance as a service" as a key component of future cloud storage environments. We demonstrate this through the design and implementation of a multi-tier cloud storage system (CACSS), and through a linear programming model that predicts future data access patterns for efficient data-caching management. The proposed caching algorithm leverages the cloud economy by incorporating both potential performance improvement and revenue gain into the storage system.
"Enabling Performance as a Service for a Cloud Storage System," Yang Li, Li Guo, A. Supratak, Yike Guo. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.80
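The economy-aware caching idea in the abstract above can be sketched as a value score that combines predicted accesses (the role the paper's linear programming model plays) with the revenue a cache hit retains. This is a hypothetical illustration, not CACSS's actual algorithm; all names and weights are assumptions.

```python
# Illustrative only: rank objects for a fast cache tier by predicted benefit
# per megabyte, mixing a performance term and a revenue term as the paper's
# caching algorithm does conceptually.

def cache_value(predicted_accesses, latency_saving_ms, revenue_per_hit, size_mb):
    # Benefit per unit of cache space: performance gain plus revenue gain,
    # scaled by how often the object is expected to be read.
    benefit = predicted_accesses * (latency_saving_ms + revenue_per_hit)
    return benefit / size_mb

def select_for_cache(objects, capacity_mb):
    """Greedily fill the cache with the highest-value objects.

    objects: iterable of (name, predicted_accesses, latency_saving_ms,
             revenue_per_hit, size_mb) tuples.
    """
    chosen, used = [], 0.0
    for obj in sorted(objects, key=lambda o: cache_value(*o[1:]), reverse=True):
        if used + obj[4] <= capacity_mb:
            chosen.append(obj[0])
            used += obj[4]
    return chosen
```

A greedy fill is a standard approximation here; an exact formulation would be a knapsack-style integer program.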
A Cloud API (Application Programming Interface) enables client applications to access services and manage resources hosted in the Cloud. To protect themselves and their customers, Cloud service providers (CSPs) often require client authentication for each API call. Authentication usually depends on some kind of secret (often called an API key), for example a secret access key, password, or access token. As such, the API key unlocks the door to the treasure inside the Cloud, and protecting these keys is critical. It is a difficult task, especially on the client side, such as users' computers or mobile devices. How do CSPs authenticate client applications? What are the security risks of managing API keys in common practice? How can we mitigate these risks? This paper focuses on finding answers to these questions. By reviewing popular client authentication methods that CSPs use, and by using Cloud APIs as software developers, we identified various security risks associated with API keys. To mitigate these risks, we use hardware secure elements for secure key provisioning, storage, and usage. The solution replaces manual key handling with end-to-end security between the CSP and its customers' secure elements. This removes the root causes of the identified risks and enhances API security. It also improves usability by eliminating manual key operations and relieving software developers of the need to work with cryptography directly.
"Keeping Your API Keys in a Safe," Hongqian Karen Lu. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.143
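A common form of the secret-key client authentication the paper reviews is HMAC request signing: the client proves possession of the API secret without sending it over the wire. The sketch below is generic (field names are illustrative, not any specific CSP's scheme); the paper's point is that the secret itself must live somewhere safe, ideally inside a secure element that performs this computation internally.

```python
# Generic HMAC-based API request signing, as many CSPs require.
import hashlib
import hmac

def sign_request(secret_key: bytes, method: str, path: str, timestamp: str) -> str:
    # Canonicalize the request fields, then MAC them with the API secret.
    msg = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(secret_key, msg, hashlib.sha256).hexdigest()

def verify(secret_key: bytes, method: str, path: str,
           timestamp: str, signature: str) -> bool:
    expected = sign_request(secret_key, method, path, timestamp)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)
```

With a secure element, `sign_request` would run inside the hardware token, so the secret never appears in application memory.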
Parallel/distributed computing frameworks, such as MapReduce and Dryad, have been widely adopted to analyze massive data. Traditionally, these frameworks depend on manual configuration to acquire the network proximity information used to optimize data placement and task scheduling. However, this approach is cumbersome, inflexible, or even infeasible in large-scale deployments, for example across multiple datacenters. In this paper, we address this problem by utilizing Software-Defined Networking (SDN). We build Palantir, an SDN service specific to parallel/distributed computing frameworks that abstracts proximity information out of the network. Palantir frees framework developers/administrators from having to manually configure the network. In addition, Palantir is flexible because it allows different frameworks to define proximity according to framework-specific metrics. We design and implement a datacenter-aware MapReduce to demonstrate Palantir's usefulness. Our evaluation shows that, based on Palantir, datacenter-aware MapReduce achieves significant performance improvement.
"Palantir: Reseizing Network Proximity in Large-Scale Distributed Computing Frameworks Using SDN," Ze Yu, Min Li, Xin Yang, Xiaolin Li. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.66
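The proximity abstraction described above can be pictured as a lookup the framework makes against a service instead of a hand-written topology file. The tier names, costs, and host table below are assumptions for illustration; Palantir derives the real topology from the SDN controller.

```python
# Hypothetical proximity service: smaller score means closer.
# In Palantir this information would come from the SDN layer, not a dict.

TOPOLOGY = {  # host -> (datacenter, rack), illustrative data
    "h1": ("dc1", "r1"), "h2": ("dc1", "r1"),
    "h3": ("dc1", "r2"), "h4": ("dc2", "r9"),
}

def proximity(a: str, b: str) -> int:
    """0 = same host, 1 = same rack, 2 = same DC, 3 = cross-DC."""
    if a == b:
        return 0
    dc_a, rack_a = TOPOLOGY[a]
    dc_b, rack_b = TOPOLOGY[b]
    if dc_a != dc_b:
        return 3
    return 1 if rack_a == rack_b else 2

def closest_replica(task_host: str, replica_hosts):
    # Datacenter-aware scheduling: read from the nearest replica.
    return min(replica_hosts, key=lambda h: proximity(task_host, h))
```

A framework-specific metric (latency, bandwidth) could replace the integer tiers without changing the interface, which is the flexibility the paper claims.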
Parallel processing plays an important role in large-scale data analytics: a job is broken into many small tasks which run in parallel on multiple machines, as in the MapReduce framework. One fundamental challenge for such parallel processing is straggling tasks, which can seriously delay job completion. In this paper, we focus on speculative execution, which is used in the literature to deal with the straggler problem. We present a theoretical framework for the optimization of a single job, which differs substantially from previous heuristics-based work. More precisely, we propose two schemes for the case where the number of parallel tasks in the job is smaller than the cluster size. In the first scheme, no monitoring is needed, and we can provide the job deadline guarantee with high probability while achieving the optimal resource consumption level. The second scheme monitors task progress and launches the optimal number of duplicates when straggling occurs. When the number of tasks in a job is larger than the cluster size, we propose an Enhanced Speculative Execution (ESE) algorithm that makes the optimal decision whenever a machine becomes available for scheduling. Simulation results show the ESE algorithm can reduce job flow time by 50% while consuming fewer resources compared to the strategy without backups.
"Speculative Execution for a Single Job in a MapReduce-Like System," Huanle Xu, W. Lau. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.84
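The progress-monitoring scheme above rests on one mechanism: detect tasks whose progress rate lags their peers, then launch duplicates. A minimal sketch of the detection step follows; the `slow_factor` threshold and median-rate comparison are illustrative heuristics, not the paper's optimal policy.

```python
# Illustrative straggler detection: a task is a straggler if its progress
# rate is well below the median rate of all tasks in the job.

def straggling_tasks(progress, elapsed, slow_factor=0.5):
    """progress: {task_id: fraction completed in [0, 1]}.
    elapsed: seconds since the job started.
    Returns task ids whose rate is below slow_factor * median rate."""
    rates = {t: p / elapsed for t, p in progress.items()}
    median = sorted(rates.values())[len(rates) // 2]
    return [t for t, r in rates.items() if r < slow_factor * median]
```

A scheduler would launch one or more backup copies of each returned task and keep whichever copy finishes first; the paper's contribution is computing the optimal number of such duplicates rather than using a fixed threshold.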
This paper shows that using SR-IOV for InfiniBand can enable virtualized HPC, but only if the NIC tunable parameters are set appropriately. In particular, contrary to common belief, our results show that the default policy of aggressive interrupt moderation can have a negative impact on the performance of InfiniBand platforms virtualized using SR-IOV. Careful tuning of interrupt moderation benefits both native and VM platforms and helps to bridge the gap between native and virtualized performance. For some workloads, the performance gap is reduced by 15-30%.
"Bridging the Virtualization Performance Gap for HPC Using SR-IOV for InfiniBand," Malek Musleh, Vijay S. Pai, J. Walters, A. Younge, S. Crago. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.89
The computational requirements of the increasingly sophisticated algorithms used in today's robotics software applications have outpaced the onboard processors of the average robot. Furthermore, the development and configuration of these applications are difficult tasks that require expertise in diverse domains, including software engineering, control engineering, and computer vision. As a solution to these problems, this paper extends and integrates our previous work, which is based on two promising techniques: Cloud Robotics and Software Product Lines. Cloud Robotics provides a powerful and scalable environment for offloading computationally expensive algorithms, enabling low-cost processors and lightweight robots. Software Product Lines allow the end user to deploy and configure complex robotics applications without dealing with low-level problems such as configuring algorithms and designing architectures. This paper discusses the proposed method in depth and demonstrates its advantages with a case study.
"A Software Product Line Approach for Configuring Cloud Robotics Applications," Luca Gherardi, D. Hunziker, Mohanarajah Gajamohan. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.104
In a cloud market, the cloud provider provisions heterogeneous virtual machine (VM) instances from its resource pool, for allocation to cloud users. Auction-based allocations are efficient in assigning VMs to users who value them the most. Existing auction design often overlooks the heterogeneity of VMs, and does not consider dynamic, demand-driven VM provisioning. Moreover, the classic VCG auction leads to unsatisfactory seller revenues and vulnerability to a strategic bidding behavior known as shill bidding. This work presents a new type of core-selecting VM auctions, which are combinatorial auctions that always select bidder charges from the core of the price vector space, with guaranteed economic efficiency under truthful bidding. These auctions represent a comprehensive three-phase mechanism that instructs the cloud provider to judiciously assemble, allocate, and price VM bundles. They are proof against shills, can improve seller revenue over existing auction mechanisms, and can be tailored to maximize truthfulness.
"Core-Selecting Auctions for Dynamically Allocating Heterogeneous VMs in Cloud Computing," Haoming Fu, Zongpeng Li, Chuan Wu, Xiaowen Chu. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.30
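The revenue problem with VCG that motivates core-selecting auctions can be seen in a standard textbook instance (the numbers below are illustrative, not from the paper). Two VM "items" A and B: bidder 1 bids 10 for the bundle {A, B}; bidders 2 and 3 bid 10 for A and B respectively. VCG awards A and B to bidders 2 and 3, but charges each of them zero, so the seller collects nothing despite 20 in winning bids. A core-selecting auction would charge the winners at least 10 in total, since bidder 1 stands ready to pay 10 for the bundle.

```python
# VCG payment: a winner pays the externality it imposes on the other bidders.
def vcg_payment(welfare_without_i, others_welfare_with_i):
    return welfare_without_i - others_welfare_with_i

# Chosen allocation: bidder 2 gets A, bidder 3 gets B (total welfare 20).
# Without bidder 2, the best achievable welfare is 10 (bidder 1's bundle bid),
# and the other winners still realize 10 in the chosen allocation, so:
pay_2 = vcg_payment(10, 10)          # 10 - 10 = 0
pay_3 = vcg_payment(10, 10)          # symmetric, also 0
seller_revenue = pay_2 + pay_3       # 0, despite 20 in winning bids
```

The same structure explains shill vulnerability: a bidder can split one large bid into several small identities to drive its VCG payments toward zero, which core constraints rule out.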
Using multiple Clouds as a single environment to conduct simulation-based virtual experiments at large scale is a challenging problem. This paper describes how this can be achieved with the Scalarm platform in the context of data farming. In particular, a use case combining a private Cloud with public, commercial Clouds is studied. We discuss the current architecture and implementation of Scalarm in terms of supporting different infrastructures, and propose how it can be extended to unify the usage of different Clouds. We discuss several aspects of this unification, including virtual machine scheduling, authentication, and virtual machine state monitoring. An experimental evaluation of the presented solution is conducted with a genetic algorithm solving the well-known Traveling Salesman Problem. The evaluation uses three resource configurations: public Cloud only, private Cloud only, and both public and private Clouds.
"Data Farming on Heterogeneous Clouds," Dariusz Król, R. Słota, J. Kitowski, L. Dutka, Jakub Liput. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.120
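The evaluation workload above, a genetic algorithm for the Traveling Salesman Problem, is a typical data-farming job: many independent runs with different parameters. A compact GA of the usual shape is sketched below (this is a generic textbook variant, not Scalarm's implementation; population size, mutation, and selection are illustrative choices).

```python
# Minimal elitist GA for TSP: keep the best half each generation and
# produce children by a single swap mutation.
import random

def tour_length(tour, dist):
    # Total length of the closed tour, wrapping from last city to first.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ga_tsp(dist, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)          # seeded for reproducible runs
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)       # swap two cities
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda t: tour_length(t, dist))
```

Each (pop_size, generations, seed) triple is an independent simulation, which is exactly why such experiments parallelize cleanly across heterogeneous Clouds.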
Consistent-hashing-based storage systems are used in many real-world applications for which energy is one of the main cost factors. However, these systems are typically designed and deployed without any mechanism to save energy at times of low demand. We present an energy-conserving implementation of a consistent-hashing-based key-value store, called PowerCass, built on Apache's Cassandra. In PowerCass, nodes are divided into three groups: active, dormant, and sleepy. Nodes in the active group cover all the data and run continuously. Dormant nodes are powered only during peak activity and for replica synchronization. Sleepy nodes are offline almost all the time, except for replica synchronization and exceptional peak loads. With this simple and elegant approach, we reduce energy consumption by up to 66% compared to the unmodified key-value store Cassandra.
"PowerCass: Energy Efficient, Consistent Hashing Based Storage for Micro Clouds Based Infrastructure," Frezewd Lemma Tena, Thomas Knauth, C. Fetzer. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.17
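The underlying mechanism, a consistent hash ring as used by Cassandra, can be sketched in a few lines: each node is placed at a point on a hash ring, and a key belongs to the first node clockwise from its own hash. This is a generic textbook sketch (single point per node, no virtual nodes, no replication), not PowerCass's implementation.

```python
# Minimal consistent hash ring: key -> first node clockwise from hash(key).
import bisect
import hashlib

class Ring:
    def __init__(self, nodes):
        # One point per node; real systems add virtual nodes for balance.
        self._points = sorted((self._h(n), n) for n in nodes)
        self._keys = [p for p, _ in self._points]

    @staticmethod
    def _h(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def owner(self, key: str) -> str:
        # Wrap around the ring with the modulo.
        i = bisect.bisect(self._keys, self._h(key)) % len(self._keys)
        return self._points[i][1]
```

PowerCass's grouping fits on top of this: if dormant and sleepy nodes hold only replicas, the active group alone can still resolve every key, which is what lets the other groups power down.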
Both economic reasons and interoperation requirements necessitate the deployment of IaaS clouds based on a share-nothing architecture. Here, live VM migration becomes a major impediment to achieving cloud-wide load balancing via selective and timely VM migrations. Our approach is based on copying virtual disk images and keeping them synchronized during the VM migration operation. In this way, we overcome the limitations of shared-storage cloud designs, as we place no constraints on the cloud's scalability and load-balancing capabilities. We propose a special-purpose file system, termed MigrateFS, that performs virtual disk replication within specified time constraints while avoiding internal network congestion. Resource consumption during VM migration is supervised by a low-overhead, scalable distributed network of brokers. We show that our approach can reduce by up to 24% the stress on already saturated physical network links during load-balancing operations.
"Time-Constrained Live VM Migration in Share-Nothing IaaS-Clouds," Konstantinos Tsakalozos, Vasilis Verroios, M. Roussopoulos, A. Delis. 2014 IEEE 7th International Conference on Cloud Computing, 27 June 2014. doi:10.1109/CLOUD.2014.18
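The disk-synchronization idea behind approaches like MigrateFS is iterative copying: transfer the virtual disk in rounds, re-copying only blocks dirtied by the running VM since the previous round, until the remaining dirty set is small enough to freeze and switch over. The sketch below is a toy model of that loop (block counts, the per-round cap standing in for congestion-aware rate limiting, and the dirtying callback are all illustrative assumptions).

```python
# Toy model of iterative dirty-block synchronization during live migration.
# Terminates as long as the write rate stays below the copy rate per round.

def sync_rounds(total_blocks, dirtied, cap):
    """total_blocks: number of blocks in the virtual disk.
    dirtied: function round_index -> set of blocks written in that round.
    cap: max blocks copied per round (models the bandwidth limit)."""
    pending = set(range(total_blocks))
    rounds = 0
    while len(pending) > cap:
        batch = set(sorted(pending)[:cap])
        pending -= batch              # these blocks are now in sync
        pending |= dirtied(rounds)    # new writes dirty some blocks again
        rounds += 1
    # The final small set is copied while the VM is briefly paused.
    return rounds, pending
```

The time constraint in the paper's title maps onto bounding `rounds * round_duration`; the broker network's job is to pick a `cap` that meets the deadline without saturating shared links.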