SPECnet: Predicting SPEC Scores using Deep Learning
Dibyendu Das, Prakash S. Raghavendra, Arun Ramachandran. DOI: 10.1145/3185768.3186301
In this work we show how to build a deep neural network (DNN), called SPECnet, that predicts SPEC® scores. More than ten years have passed since the introduction of the SPEC CPU2006 suite (retired in January 2018), and thousands of submissions are available for the CPU2006 integer and floating-point benchmarks. We build a DNN that takes hardware and software features from these submissions as input and is trained on the corresponding reported SPEC scores. We then use the trained DNN to predict scores for upcoming machine configurations. We achieve 5%-7% training and dev/test errors, corresponding to high prediction accuracy rates of 93%-95%. This is comparable to the expected human-level accuracy of 97%-98% achieved via careful performance modelling of the core and uncore system components. In addition to the CPU2006 suite, we also apply SPECnet to SPEComp2012 and SPECjbb2015. Although the reported submissions for these benchmark suites number only in the hundreds, we show that the DNN predicts reasonably well for them too (~85% accuracy). Our SPECnet implementation uses state-of-the-art TensorFlow infrastructure and is flexible and extensible.
Towards the Performance Analysis of Apache Tez Applications
J. I. Requeno, Iñigo Gascón, J. Merseguer. DOI: 10.1145/3185768.3186284
Apache Tez is an application framework for processing large volumes of data through interactive queries. When a Tez developer must meet performance requirements, s/he needs to configure and optimize the Tez application for specific execution contexts. These are not easy tasks, even though the Apache Tez configuration significantly impacts the performance of the application. We therefore propose steps towards the modeling and simulation of Apache Tez applications that can help in the performance assessment of Tez designs. For the modeling, we propose a UML profile for Apache Tez. For the simulation, we propose to transform the stereotypes of the profile into stochastic Petri nets, which can then be used for computing performance metrics.
CAUS: An Elasticity Controller for a Containerized Microservice
Floriment Klinaku, Markus Frank, Steffen Becker. DOI: 10.1145/3185768.3186296
Recent trends towards microservice architectures and containers as a deployment unit raise the need for novel adaptation processes that enable elasticity for containerized microservices. Microservices facing unpredictable workloads need to react quickly and match supply as closely as possible to demand in order to guarantee quality objectives and keep costs at a minimum. Current state-of-the-art approaches, which react to conditions that reflect the need to scale, are either slow or lack precision in supplying the demanded capacity. We therefore propose a novel heuristic adaptation process that enables elasticity for a containerized microservice. The proposed method consists of two complementary mechanisms. One reacts to changes in load intensity by scaling container instances according to their processing capability. The other manages additional containers as a buffer for unpredictable workload changes. We evaluate the proposed adaptation process and discuss its effectiveness and feasibility in autonomously controlling the number of replicated containers.
PROWL: Towards Predicting the Runtime of Batch Workloads
Dheeraj Chahal, Benny Mathew. DOI: 10.1145/3185768.3186407
Many applications in the enterprise domain require batch processing to perform business-critical operations. Batch jobs perform automated, complex processing of large volumes of data without human intervention. Parallel processing allows multiple batch jobs to run concurrently to minimize the total completion time. However, resource sharing may cause one or more jobs to exceed their individual completion deadlines. The objective of this work is to predict the completion time of a batch job when it runs in conjunction with other batch jobs. Batch jobs may be multi-threaded, and threads can have distinct CPU requirements. Our predictions are based on a simulation model that uses the service demand (total CPU time required) of each thread in the job. Moreover, for multi-threaded jobs, we simulate the server with the instantaneous CPU utilization of each job over small intervals, instead of an aggregate value, when predicting the completion time. In this paper, a simulation-based method is presented to predict the completion time of each batch job in a concurrent run of multiple jobs. A validation study with the synthetic benchmark FIO shows that the job completion time prediction error is less than 15% in the worst case.
Package-Aware Scheduling of FaaS Functions
Cristina L. Abad, Edwin F. Boza, Erwin Van Eyk. DOI: 10.1145/3185768.3186294
We consider the problem of scheduling small cloud functions on serverless computing platforms. Fast deployment and execution of these functions is critical, for example, for microservice architectures. However, functions that require large packages or libraries are bloated and start slowly. A solution is to cache packages at the worker nodes instead of bundling them with the functions. However, existing FaaS schedulers are vanilla load balancers, agnostic of any packages that may have been cached in response to prior function executions, and cannot reap the benefits of package caching (other than by chance). To address this problem, we propose a package-aware scheduling algorithm that tries to assign functions that require the same package to the same worker node. Our algorithm increases the hit rate of the package cache and, as a result, reduces the latency of the cloud functions. At the same time, we consider the load sustained by the workers and actively seek to avoid imbalance beyond a configurable threshold. Our preliminary evaluation shows that, even with our limited exploration of the configuration space so far, we can achieve a 66% performance improvement at the cost of a (manageable) higher node imbalance.
On the Simulation of Processors Enhanced for Security in Virtualization
Swapneel C. Mhatre, P. Chandran, J. R. DOI: 10.1145/3185768.3185774
Computer system simulators model hardware and reduce the time required for hardware design by exploring the design space, thereby eliminating the time-consuming process of testing each possibility by actually building the hardware. Simulators used in computer architecture typically model processors, memories, and disks. With virtualization regaining importance, there is a need for a simulator for virtualization-enhanced processors, especially ones with security-related enhancements. But adapting existing simulators to model virtualization-enabled processors is a deceptively difficult task, fraught with complications such as multiple access levels. In our research, we aim to identify methods for effectively simulating virtualization-enabled processors. This paper reports the results of a preliminary simulation, using ModelSim, of Architectural Support for Memory Isolation (ASMI), a memory architecture model that provides memory isolation, and highlights the need for a hardware-based simulator for processors enhanced for security in virtualization.
An Auto-Tuning Framework for a NUMA-Aware Hessenberg Reduction Algorithm
Mahmoud Eljammaly, L. Karlsson, B. Kågström. DOI: 10.1145/3185768.3186304
The performance of a recently developed Hessenberg reduction algorithm depends greatly on the values chosen for its tunable parameters. The problem is hard to solve effectively with generic methods and tools. We describe a modular auto-tuning framework in which the underlying optimization algorithm is easy to substitute. The framework exposes sub-problems of the standard auto-tuning type for which existing generic methods can be reused. The outputs of concurrently executing sub-tuners are assembled by the framework into a solution to the original problem. This paper presents work in progress.
Autoscaling Performance Measurement Tool
Anshul Jindal, Vladimir Podolskiy, M. Gerndt. DOI: 10.1145/3185768.3186293
More companies are adding layers of virtualization to their cloud applications, increasing flexibility in the development, deployment, and management of applications. An increase in the number of layers can add overhead during autoscaling and can cause coordination issues when layers share the same resources while being managed by different software. To capture these multi-layered autoscaling performance issues, we developed the Autoscaling Performance Measurement Tool (APMT). The tool evaluates the performance of cloud autoscaling solutions, and combinations thereof, for varying types of load patterns. In this paper, we highlight the architecture of the tool and its configuration. Using the data collected by APMT, we illustrate the autoscaling behavior of major IaaS providers with Kubernetes pods as the second layer of virtualization.
Evaluation of Energy Consumption of Replicated Tasks in a Volunteer Computing Environment
A. McGough, M. Forshaw. DOI: 10.1145/3185768.3186313
High Throughput Computing allows workloads of many thousands of tasks to be performed efficiently over many distributed resources, and it frees the user from the laborious process of managing task deployment, execution, and result collection. In many cases, however, the High Throughput Computing system is composed of volunteer computational resources, where tasks may be evicted by the owner of a resource. This has two main disadvantages. First, tasks may take longer to run, as they may require multiple deployments before finally obtaining enough time on a resource to complete. Second, the wasted computation time leads to wasted energy. We can reduce the first disadvantage by submitting multiple replicas of a task and taking the result from the first replica to complete. This, though, could lead to a significant increase in energy consumption. We therefore aim to submit only the minimum number of replicas required to run the task in the allocated time while simultaneously minimizing energy. In this work we evaluate the effect of fixed replica counts and of Reinforcement Learning on the proportion of tasks that fail to finish in a given time frame and on the energy consumed by the system.
DIBS: A Data Integration Benchmark Suite
A. Cabrera, Clayton J. Faber, Kyle Cepeda, Robert Derber, Cooper Epstein, Jason Zheng, R. Cytron, R. Chamberlain. DOI: 10.1145/3185768.3186307
As the generation of data becomes more prolific, the amount of time and resources necessary to perform analyses on these data increases. Less well understood, however, are the data preprocessing steps that must be applied before any meaningful analysis can begin. This problem of taking data in some initial form and transforming it into a desired one is known as data integration. Here, we introduce the Data Integration Benchmark Suite (DIBS), a suite of applications that are representative of data integration workloads across many disciplines. We apply a comprehensive characterization to these applications to better understand the general behavior of data integration tasks. As a result of our benchmark suite and characterization methods, we offer insights regarding data integration tasks that will guide other researchers designing solutions in this area.