"Go Gentle into the Good Night via Controlled Battery Discharging". Shih-Hao Liang, T. Chiueh, Welkin Ling. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797035

Abstract: Prevalent battery management approaches for mobile devices treat a battery's residual capacity as a given budget and try to make the best of that budget by turning off lower-priority tasks. In contrast, the research reported in this paper aims to maximize the usable value of a battery's residual capacity by operating the battery according to its discharge characteristic curves (DCCs), which describe a battery's discharging dynamics in terms of the correlation among its voltage level, capacity, and discharging current. According to DCC theory, it is possible to increase a battery's effective capacity in ampere-hours by capping the discharging current in a certain way after the capacity falls below a threshold. This paper describes a DCC-based Battery Discharging (DBD) technique that automatically derives a battery's DCCs, uses them to determine a suitable instantaneous discharge current budget, and limits the total discharge current to that budget. Measurements on an operational prototype show that DBD can extend a battery's residual capacity by more than 20% after its state of charge (SOC) has dropped to 30%.
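The capping policy the abstract alludes to can be sketched as a lookup against sampled discharge curves. Everything concrete below — the cutoff voltage, the SOC threshold, and the DCC table — is made up for illustration; the paper derives real curves from the battery itself.

```python
# Illustrative sketch of DCC-based discharge budgeting (not the paper's code).
# A DCC row maps discharge current to terminal voltage at a given state of
# charge; we pick the largest sampled current that keeps the battery above
# its cutoff voltage. Assumes soc is at or above the lowest sampled row.

CUTOFF_VOLTAGE = 3.0   # volts; hypothetical cutoff
SOC_THRESHOLD = 0.30   # start capping below 30% state of charge

# Hypothetical DCC samples: soc -> list of (current_mA, voltage_V) pairs.
DCC = {
    0.30: [(100, 3.45), (300, 3.30), (500, 3.15), (700, 2.95)],
    0.20: [(100, 3.35), (300, 3.18), (500, 3.02), (700, 2.80)],
    0.10: [(100, 3.20), (300, 3.05), (500, 2.88), (700, 2.65)],
}

def current_budget(soc: float) -> float:
    """Return the max discharge current (mA) that keeps voltage above cutoff."""
    if soc > SOC_THRESHOLD:
        return float("inf")  # no capping while SOC is high
    # Use the nearest sampled DCC row at or below the current SOC.
    row_soc = max(s for s in DCC if s <= soc + 1e-9)
    feasible = [i for i, v in DCC[row_soc] if v > CUTOFF_VOLTAGE]
    return max(feasible) if feasible else min(i for i, _ in DCC[row_soc])

print(current_budget(0.25))  # -> 500 (mA), from the 20% row
```

The budget tightens as SOC falls because lower rows reach the cutoff voltage at smaller currents, which is exactly the effect the DBD technique exploits.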
"For a Microkernel, a Big Lock Is Fine". S. Peters, A. Danis, Kevin Elphinstone, G. Heiser. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797042

Abstract: It is well established that high-end scalability requires fine-grained locking, and for a system like Linux a big lock degrades performance even at moderate core counts. Nevertheless, we argue that a big lock may be fine-grained enough for a microkernel designed to run on closely coupled cores (sharing a cache): with the short system calls typical of a well-designed microkernel, lock contention remains low under realistic loads.
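The contention argument can be made concrete with back-of-the-envelope arithmetic: the fraction of time the big lock is held is roughly cores × per-core syscall rate × lock hold time. The numbers below are hypothetical, chosen only to show the shape of the calculation.

```python
# Rough model of big-lock contention (illustrative numbers, not seL4 data).
# If each of n cores issues syscalls at rate r (per second) and the kernel
# holds the lock for h microseconds per call, the lock is busy about
# n * r * h * 1e-6 of the time.

def lock_utilization(cores, syscalls_per_sec, hold_time_us):
    return cores * syscalls_per_sec * hold_time_us * 1e-6

# Hypothetical microkernel: 1 us hold time, 50k syscalls/s per core, 4 cores.
print(lock_utilization(4, 50_000, 1.0))  # ~0.2: the lock is free ~80% of the time
```

The same arithmetic shows why the argument does not transfer to Linux: with hold times in the tens of microseconds, utilization saturates at far lower core counts.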
"Rethinking Compiler Optimizations for the Linux Kernel: An Explorative Study". Pengfei Yuan, Yao Guo, Xiangqun Chen. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797030

Abstract: The performance of the operating system kernel is critical to the applications running on it. Although much effort has been spent on improving Linux kernel performance, comparatively little attention has been paid to GCC, the compiler used to build Linux. As a result, the vanilla Linux kernel is typically compiled with the same -O2 option as most user programs. This paper investigates how different GCC configurations affect the performance of the Linux kernel. We compare a number of compiler variations on the Linux kernel from different angles, including switching simple options, using different GCC versions, controlling specific optimizations, and performing profile-guided optimization. We present a detailed analysis of the experimental results and discuss potential compiler optimizations to further improve kernel performance. Because the current GCC is far from optimal for compiling the Linux kernel, a future kernel compiler should include specialized optimizations, and more advanced compiler optimizations should also be incorporated to improve kernel performance.
"TotalCOW: Unleash the Power of Copy-On-Write for Thin-provisioned Containers". Xingbo Wu, Wenguang Wang, Song Jiang. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797024

Abstract: Modern file systems leverage the copy-on-write (COW) technique to create snapshots efficiently. COW can significantly reduce demand on disk space and I/O bandwidth by not duplicating entire files when snapshots are made. However, the memory space and I/O requests demanded by applications do not benefit from this technique. In existing systems, a disk block shared by multiple files due to COW is read from disk multiple times: each read is treated as an independent block belonging to a different file and is cached as a separate block in memory. This issue arises because current file access and caching are based on logical file addresses. It poses a significant challenge to emerging lightweight container virtualization techniques, such as Linux Containers and Docker, which rely on COW to quickly spawn a large number of thin-provisioned container instances. We propose a lightweight approach that addresses the issue by leveraging knowledge about files produced by COW. Experimental results show that a prototype system using this approach, named TotalCOW, can significantly reduce redundant disk reads and caching without compromising the efficiency of accessing COW files.
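The underlying fix — indexing the cache by physical block rather than by per-file logical address — can be illustrated with a toy cache. The class and mapping below are illustrative, not TotalCOW's actual interfaces.

```python
# Toy illustration of physical-block-indexed caching (not TotalCOW's code).
# Two files that share a COW block map to the same cache entry, so the
# "disk" is read only once, however many files reference the block.

class PhysicalBlockCache:
    def __init__(self, disk):
        self.disk = disk          # physical block number -> bytes
        self.cache = {}           # physical block number -> cached bytes
        self.disk_reads = 0

    def read(self, file_map, logical_block):
        """file_map: a file's logical block number -> physical block number."""
        pbn = file_map[logical_block]
        if pbn not in self.cache:          # first reader pays the disk read
            self.cache[pbn] = self.disk[pbn]
            self.disk_reads += 1
        return self.cache[pbn]

disk = {7: b"shared data"}
cache = PhysicalBlockCache(disk)
base_image = {0: 7}     # container base image: logical block 0 -> block 7
snapshot = {0: 7}       # COW snapshot shares the same physical block
cache.read(base_image, 0)
cache.read(snapshot, 0)
print(cache.disk_reads)  # -> 1: one physical read serves both files
```

A logical-address-indexed cache would key on (file, 0) instead, caching the same bytes twice and reading the disk twice — the duplication the paper targets.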
"Anatomy of Cloud Monitoring and Metering: A Case Study and Open Problems". Ali Anwar, A. Sailer, Andrzej Kochut, A. Butt. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797039

Abstract: Microservices-based architecture has recently gained traction among cloud service providers in their quest for a more scalable and reliable modular architecture. In parallel with this architectural choice, cloud providers also face market demand for fine-grained, usage-based pricing. Both the management of microservices' complex dependencies and fine-grained metering require providers to track and log detailed monitoring data from their deployed cloud setups. Hence, on one hand, providers need to record all such performance changes and events; on the other hand, they are concerned about the additional cost of the resources required to store and process this ever-increasing amount of collected data. In this paper, we analyze the design of the monitoring subsystem provided by open-source cloud solutions such as OpenStack. Specifically, we analyze how monitoring data is collected by OpenStack and assess the characteristics of the data it collects, aiming to pinpoint the limitations of the current approach and suggest alternative solutions. Our preliminary evaluation of the proposed solutions reveals that it is possible to reduce the monitored data size by up to 80% and to lower the missed-anomaly rate from 3% to as little as 0.05-0.1%.
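One plausible way to achieve the kind of data reduction reported above is to log a sample only when it deviates meaningfully from the last logged value. The filter below is a generic sketch under that assumption, not the paper's actual mechanism or OpenStack's collection pipeline.

```python
def significant_samples(samples, threshold=0.05):
    """Keep only samples that differ from the last kept one by more than
    `threshold` (relative). Always keeps the first sample."""
    kept = []
    for s in samples:
        if not kept or abs(s - kept[-1]) > threshold * max(abs(kept[-1]), 1e-9):
            kept.append(s)
    return kept

# Steady CPU utilization with one brief spike: 7 raw samples shrink to 3,
# while the anomaly (the jump to 0.90) is still captured.
cpu_util = [0.50, 0.50, 0.51, 0.50, 0.90, 0.91, 0.50]
print(significant_samples(cpu_util))  # -> [0.5, 0.9, 0.5]
```

The trade-off mirrors the paper's numbers: a tighter threshold stores more data but misses fewer anomalies, so the threshold is the knob between the 80% size reduction and the missed-anomaly rate.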
"Zero-copy Migration for Lightweight Software Rejuvenation of Virtualized Systems". Kenichi Kourai, H. Ooba. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797026

Abstract: Virtualized systems tend to suffer from software aging, the phenomenon whereby the state of a running system degrades over time. Aging is undone by a technique called software rejuvenation, e.g., a system reboot. To reduce the downtime due to software rejuvenation, all the virtual machines (VMs) on an aged virtualized system have to be migrated away in advance. However, VM migration stresses the system and causes performance degradation. In this paper, we propose VMBeam, which enables lightweight software rejuvenation of virtualized systems via zero-copy migration. When rejuvenating an aged virtualized system, VMBeam starts a fresh virtualized system on the same host using nested virtualization. It then migrates all the VMs from the aged virtualized system to the clean one. During this migration, VMBeam relocates the memory of the VMs directly from the aged virtualized system to the clean one without any copying. We have implemented VMBeam in Xen and confirmed that it reduces system load.
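The zero-copy idea can be caricatured in a few lines: relocation hands the same memory over by reference instead of duplicating it. This is conceptual only — VMBeam remaps real machine pages under Xen — and the names below are invented for illustration.

```python
# Conceptual contrast between copy-based and zero-copy VM relocation
# (illustrative only; VMBeam operates on machine pages under Xen).
class VM:
    def __init__(self, memory):
        self.memory = memory

def migrate_with_copy(vm):
    return VM(bytearray(vm.memory))   # costs O(memory size) in time and space

def migrate_zero_copy(vm):
    new_vm = VM(vm.memory)            # hand over the same pages by reference
    vm.memory = None                  # the aged system relinquishes them
    return new_vm

aged = VM(bytearray(b"guest pages"))
pages = aged.memory
clean = migrate_zero_copy(aged)
print(clean.memory is pages)  # -> True: no bytes were moved or copied
```

The copy variant is what a conventional live migration between hosts must do; co-locating source and destination via nested virtualization is what lets VMBeam skip it.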
"Mjölnir: The Magical Web Application Hammer". Jelle van den Hooff, David Lazar, James W. Mickens. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797025

Abstract: Conventional wisdom suggests that rich, large-scale web applications are difficult to build and maintain. An implicit assumption behind this intuition is that a large web application requires massive numbers of servers and complicated, one-off back-end architectures. We provide empirical evidence to disprove this intuition. We then propose new programming abstractions and a new deployment model that reduce the overhead of building and running web services.
"InterFS: An Interplanted Distributed File System to Improve Storage Utilization". Peng Wang, LeThanhMan Cao, Chunbo Lai, Leqi Zou, Guangyu Sun, J. Cong. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797036

Abstract: Resource under-utilization is a common problem in modern data centers. Although researchers have proposed consolidation techniques to improve the utilization of computing resources, there is still no approach that mitigates the particularly low utilization of storage capacity in clusters serving online services. A potential solution is to "interplant" a distributed storage system alongside the services on these clusters to harvest the unused storage. However, avoiding performance interference with the existing services is an essential prerequisite for interplanting. We therefore propose InterFS, a POSIX-compliant distributed file system that aims to fully exploit the storage resources of data center clusters. InterFS adopts intelligent resource isolation, peak-load dodging, and region-based replica placement. As a result, it can be interplanted with other resource-intensive services without interfering with them, and it amply fulfills the storage requirements of small-scale applications in the data center. InterFS is currently deployed on 20,000+ servers at Baidu, providing 80 PB of storage space to 200+ long-tail services.
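Of the three schemes, region-based replica placement is the easiest to sketch: spread a file's replicas across distinct regions so that no single region's load peak (or failure) touches every copy. The interface below is hypothetical, not InterFS's actual placement code.

```python
import random

# Sketch of region-based replica placement (hypothetical interface, not
# InterFS's scheme): each replica lands in a different region, so dodging
# one region's peak load still leaves the other replicas serviceable.
def place_replicas(servers_by_region, n_replicas=3):
    regions = random.sample(sorted(servers_by_region), n_replicas)
    return {r: random.choice(servers_by_region[r]) for r in regions}

servers_by_region = {
    "region-a": ["a1", "a2"],
    "region-b": ["b1", "b2"],
    "region-c": ["c1"],
    "region-d": ["d1", "d2", "d3"],
}
placement = place_replicas(servers_by_region)
print(len(placement))  # -> 3 replicas, each in a distinct region
```

A production placer would also weigh per-server free space and current load; the dict keys guarantee only the cross-region property the abstract emphasizes.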
"MemScope: Analyzing Memory Duplication on Android Systems". Byeoksan Lee, Seongmin Kim, Eru Park, Dongsu Han. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797023

Abstract: Main memory is one of the most important and valuable resources in mobile devices. While resource efficiency in general matters in mobile computing, where programs run on limited battery power and resources, managing main memory is especially critical because it has a significant impact on user experience. However, there is mounting evidence that Android systems do not utilize main memory efficiently and in fact allow page-level duplication in physical memory. This paper takes a first step toward accurately measuring the level of memory duplication and diagnosing the root causes of the problem. To this end, we develop MemScope, a system that automatically identifies and measures memory duplication on Android systems. It determines which memory segments contain duplicate pages by analyzing the page table and memory contents. We present the design of MemScope and a preliminary evaluation. The results show that 10 to 20% of the memory pages used by applications are redundant. We identify several possible causes of the problem.
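The duplicate-detection step can be sketched by hashing fixed-size pages and grouping identical contents. MemScope inspects real page tables and physical memory on Android; this sketch merely scans an in-memory byte buffer.

```python
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096

def find_duplicate_pages(memory: bytes):
    """Group page indices by content hash; return groups with more than
    one page, i.e., the redundant pages a deduplicator could merge."""
    groups = defaultdict(list)
    for i in range(len(memory) // PAGE_SIZE):
        page = memory[i * PAGE_SIZE:(i + 1) * PAGE_SIZE]
        groups[hashlib.sha256(page).hexdigest()].append(i)
    return [idxs for idxs in groups.values() if len(idxs) > 1]

# Three pages: pages 0 and 1 are identical (all zeros), page 2 differs.
mem = bytes(PAGE_SIZE) * 2 + b"\x01" * PAGE_SIZE
print(find_duplicate_pages(mem))  # -> [[0, 1]]
```

Comparing full page contents (or using a collision-resistant hash, as here) avoids false merges; the harder part MemScope tackles is attributing each duplicate group to the segment and subsystem that produced it.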
"Anatomizing System Activities on Interactive Wearable Devices". Renju Liu, Lintong Jiang, Ningzhe Jiang, F. Lin. Proceedings of the 6th Asia-Pacific Workshop on Systems (APSys '15), July 27, 2015. DOI: https://doi.org/10.1145/2797022.2797032

Abstract: This paper presents a detailed, first-of-its-kind anatomy of a commodity interactive wearable system. We ask two questions: (1) do interactive wearables deliver "close-to-metal" energy efficiency and interactive performance, and if not, (2) what are the root causes preventing them from doing so? Recognizing that the usage of a wearable device is dominated by simple, short use scenarios, we profile a core set of these scenarios on two cutting-edge Android Wear devices. Following a drill-down approach, we capture system behavior at a wide spectrum of granularities, from system power and user-perceived latencies, to OS activities, to function calls within individual processes. To make such profiling possible, we extensively customized profilers, analyzers, and kernel facilities. The profiling results suggest that current Android Wear devices are far from efficient and responsive: simply updating the displayed time keeps a device intermittently busy for 400 ms; touching to show a notification takes more than one second. Our results further suggest that the Android Wear OS, which inherits much of its architecture from handhelds, is responsible. For example, the OS's activity and window managers often dominate CPU usage; a simple UI task that should finish in a snap is often scheduled to interleave with numerous CPU idle periods and other unrelated tasks. Our findings urge a rethink of the OS to directly address wearables' unique usage.