Saving power in flash and disk hybrid storage system
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366680
L. Prada, José Daniel García Sánchez, J. Carretero, Félix García
This paper considers the question of saving energy in the disk drive by taking advantage of the diverse devices in a hybrid storage system that employs flash and disk drives. Flash and disk have different power characteristics, with flash consuming much less power than the disk drive. We propose a technique that uses a flash device as a cache for a single disk device. We examine various options for managing the flash and disk devices in such a hybrid system and show that the proposed method saves energy in diverse scenarios. We implemented a simulator composed of disk and flash devices. This paper gives an overview of the design and evaluation of the proposed approach using realistic workloads.
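The abstract gives no implementation detail, so the following is only a rough Python sketch of the general idea it describes: an LRU flash cache in front of a single disk, where cache hits keep the disk spun down. All power values, the spin-down timeout, and the trace format are assumptions for illustration, not figures from the paper.

```python
# Minimal sketch (not the authors' simulator): a trace-driven model of a
# flash cache in front of a single disk, with assumed power numbers.
from collections import OrderedDict

# Illustrative parameters only -- not taken from the paper.
DISK_ACTIVE_W, DISK_IDLE_W, DISK_STANDBY_W = 10.0, 5.0, 1.0
FLASH_ACCESS_W = 0.5
SPINDOWN_TIMEOUT = 30.0          # seconds of inactivity before spin-down

def hybrid_energy(trace, cache_blocks):
    """trace: list of (timestamp, block_id); returns (energy_J, hit_ratio)."""
    cache = OrderedDict()        # LRU flash cache
    energy, hits = 0.0, 0
    last_disk_access = -1e9
    prev_t = trace[0][0] if trace else 0.0
    for t, block in trace:
        dt = t - prev_t
        # Disk draws idle power until the spin-down timeout expires,
        # standby power afterwards.
        idle_window = min(dt, max(0.0, SPINDOWN_TIMEOUT - (prev_t - last_disk_access)))
        energy += idle_window * DISK_IDLE_W + (dt - idle_window) * DISK_STANDBY_W
        if block in cache:       # flash hit: disk stays down
            cache.move_to_end(block)
            hits += 1
            energy += FLASH_ACCESS_W
        else:                    # miss: disk must service the request
            energy += DISK_ACTIVE_W
            last_disk_access = t
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)
        prev_t = t
    return energy, hits / max(1, len(trace))
```

Feeding the same block trace through `hybrid_energy` with different `cache_blocks` values gives a crude view of how flash capacity trades against disk energy, which is the kind of comparison the paper's simulator is built to make with realistic workloads.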
{"title":"Saving power in flash and disk hybrid storage system","authors":"L. Prada, José Daniel García Sánchez, J. Carretero, Félix García","doi":"10.1109/MASCOT.2009.5366680","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366680","url":null,"abstract":"This paper considers the question of saving energy in the disk drive making advantage of diverse devices in a hybrid storage system employing flash and disk drives. The flash and disk offer different power characteristics, being flash much less power consuming than the disk drive. We propose a technique that uses a flash device as a cache for a single disk device. We examine various options for managing the flash and disk devices in such a hybrid system and show that the proposed method saves energy in diverse scenarios. We implemented a simulator composed of disk and flash devices. This paper gives an overview of the design and evaluation of the proposed approach with the help of realistic workloads.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"212 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117177117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Presenting Dynamic Markovian Agents with a road tunnel application
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5367075
Davide Cerotti, M. Gribaudo, A. Bobbio
The paper discusses a Dynamic Markovian Agent Model obtained by adding mobility to a recently introduced formalism suitable for the analysis of large-scale systems composed of a population of interacting entities, called Markovian Agents (MA). The differential equations describing the evolution of the MA density in time and space are derived, and their numerical solution is briefly sketched. An application to the analysis of the flow of vehicles in a road tunnel is discussed, together with the evaluation of the probability of collision against a fixed obstacle.
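The paper's equations and their numerical solution are only sketched in the abstract; as a purely illustrative companion (not the authors' model), the fragment below integrates a discretized density-evolution equation for a hypothetical two-state mobile agent (moving/stopped) on a one-dimensional tunnel. The speed, transition rates, grid, and initial condition are invented for the example.

```python
# Illustrative only: forward-Euler / upwind discretization of a density
# evolution of the kind used for mobile agents, with a hypothetical
# two-state agent (moving / stopped) on a 1-D tunnel.
import numpy as np

L, N = 1000.0, 200                    # tunnel length [m], number of cells
dx = L / N                            # cell size [m]
v = 20.0                              # speed of "moving" agents [m/s]
lam_stop, lam_go = 0.01, 0.05         # assumed state-transition rates [1/s]
dt = 0.1                              # time step chosen so that v*dt < dx

rho = np.zeros((2, N))                # rho[0] = moving, rho[1] = stopped
rho[0, :10] = 0.1                     # initial platoon entering the tunnel

def step(rho):
    moving, stopped = rho
    # Upwind advection of the moving population along the tunnel.
    flux = v * moving
    adv = np.zeros_like(moving)
    adv[1:] = (flux[:-1] - flux[1:]) / dx
    adv[0] = -flux[0] / dx            # open upstream boundary (no new arrivals)
    # Markovian transitions between the local agent states.
    to_stopped = lam_stop * moving
    to_moving = lam_go * stopped
    new_moving = moving + dt * (adv - to_stopped + to_moving)
    new_stopped = stopped + dt * (to_stopped - to_moving)
    return np.vstack([new_moving, new_stopped])

for _ in range(1000):                 # simulate 100 s
    rho = step(rho)
```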
{"title":"Presenting Dynamic Markovian Agents with a road tunnel application","authors":"Davide Cerotti, M. Gribaudo, A. Bobbio","doi":"10.1109/MASCOT.2009.5367075","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5367075","url":null,"abstract":"The paper discusses a Dynamic Markovian Agent Model obtained by adding mobility to a recently introduced new formalism suitable for the analysis of large scale systems, composed by a population of interacting entities, called Markovian Agents (MA). The differential equations describing the evolution of the MA density in time and space are derived, and their numerical solution is briefly sketched. An application to the analysis of the flow of vehicles in a road tunnel is discussed, together with the evaluation of the probability of collision against a fixed obstacle.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127019824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Service differentiation in multi-rate HSDPA systems
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366777
Hongxia Sun, C. Williamson
In multi-rate cellular transmission systems, users with different Quality of Service (QoS) requirements share the same wireless channel. In this paper, we investigate the problem of efficient resource allocation for scheduling with differentiated QoS support in a multi-rate system. We propose Dynamic Global Proportional Fairness (DGPF) scheduling on the downlink. We investigate the performance of the scheduling algorithm and model the proposed scheme in a High Speed Downlink Packet Access (HSDPA) simulation environment. The simulation results show that our approach can achieve suitable QoS for different classes of users without compromising aggregate network throughput. The results also show that TCP dynamics affect overall system performance.
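The DGPF algorithm itself is not described in the abstract; as a reference point, the sketch below implements the classic proportional-fair downlink scheduler with an added per-class weight as a crude stand-in for QoS differentiation. The peak rates, weights, averaging window, and channel model are assumptions for illustration.

```python
# Not the authors' DGPF algorithm: a minimal weighted proportional-fair
# downlink scheduler, one user served per TTI.
import random

def pf_schedule(users, ttis=1000, tc=100.0):
    """users: list of dicts with 'peak_rate' and 'weight'. Returns the
    long-run average served rate per user."""
    avg = {i: 1e-3 for i in range(len(users))}        # EWMA of served rate
    for _ in range(ttis):
        # Instantaneous achievable rate: a random fraction of the peak rate
        # stands in for per-TTI channel quality feedback.
        inst = [u['peak_rate'] * random.uniform(0.1, 1.0) for u in users]
        # Weighted proportional-fair metric: weight * inst / average.
        metric = [users[i]['weight'] * inst[i] / avg[i] for i in range(len(users))]
        k = max(range(len(users)), key=lambda i: metric[i])
        for i in range(len(users)):
            served = inst[i] if i == k else 0.0
            avg[i] += (served - avg[i]) / tc           # moving-average update
    return avg

rates = pf_schedule([{'peak_rate': 14.0, 'weight': 2.0},   # premium user
                     {'peak_rate': 14.0, 'weight': 1.0},   # best-effort user
                     {'peak_rate': 3.6,  'weight': 1.0}])  # low-rate terminal
```

With these weights, the premium user is scheduled more often than the equally placed best-effort user, which is the qualitative behaviour QoS weighting is meant to produce; the paper evaluates its own scheme under HSDPA and TCP dynamics rather than this simplified model.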
{"title":"Service differentiation in multi-rate HSDPA systems","authors":"Hongxia Sun, C. Williamson","doi":"10.1109/MASCOT.2009.5366777","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366777","url":null,"abstract":"In multi-rate cellular transmission systems, users with different Quality of Service (QoS) requirements share the same wireless channel. In this paper, we investigate the problem of efficient resource allocation for scheduling with differentiated QoS support in a multi-rate system. We propose Dynamic Global Proportional Fairness (DGPF) scheduling on the downlink. We investigate the performance of the scheduling algorithm and model the proposed scheme in a High Speed Downlink Packet Access (HSDPA) simulation environment. The simulation results show that our approach can achieve suitable QoS for different classes of users without compromising aggregate network throughput. The results also show that TCP dynamics affect overall system performance.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129320554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power and performance modeling of virtualized desktop systems
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366785
Andrzej Kochut
Desktop virtualization is a new delivery method in which desktop operating systems execute in a data center and users access their applications using stateless “thin-client” devices. This paradigm promises significant benefits in terms of data security, flexibility, and reduction of the total cost of ownership. However, in order to further optimize this approach while maintaining good user experience, efficient resource management algorithms are required. This paper formulates an analytical model allowing for detailed investigation of how the power consumption of a virtualized server farm depends on the properties of the workload, the adaptiveness of the virtualization infrastructure, and the average density of virtual machines per physical server. The assumptions needed to develop the model are confirmed using statistical analysis of desktop workload traces, and the model itself is validated using simulations. We apply the model to compare the power consumption of static and dynamic virtual machine allocation strategies. The results of the study can be used to develop online virtual machine migration algorithms. Even though this paper focuses on virtualized systems running desktop workloads, the modeling approach is general and can be applied in other contexts.
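The paper's analytical model is not reproduced here; the toy calculation below only illustrates the static-versus-dynamic comparison the abstract mentions, using an assumed linear server power curve and an assumed packing density.

```python
# A toy calculation, not the paper's analytical model: linear server power
# as a function of utilization, comparing static placement against an
# idealized dynamic consolidation that powers servers on and off.
import math

P_IDLE, P_PEAK = 150.0, 300.0     # assumed per-server power [W]
VMS_PER_SERVER = 20               # assumed VM packing density

def server_power(util):
    return P_IDLE + (P_PEAK - P_IDLE) * util

def static_power(total_vms, active_fraction):
    servers = math.ceil(total_vms / VMS_PER_SERVER)
    return servers * server_power(active_fraction)   # load spread over all servers

def dynamic_power(total_vms, active_fraction):
    active_vms = math.ceil(total_vms * active_fraction)
    servers = max(1, math.ceil(active_vms / VMS_PER_SERVER))
    util = min(1.0, active_vms / (servers * VMS_PER_SERVER))
    return servers * server_power(util)

# Example: 1000 desktop VMs with 30% of them active (e.g., off-hours).
print(static_power(1000, 0.3), dynamic_power(1000, 0.3))
```

With these assumed numbers the consolidated configuration draws roughly half the power of the static one at 30% activity; the paper derives such comparisons analytically and validates them against desktop traces.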
{"title":"Power and performance modeling of virtualized desktop systems","authors":"Andrzej Kochut","doi":"10.1109/MASCOT.2009.5366785","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366785","url":null,"abstract":"Desktop virtualization is a new delivery method in which desktop operating systems execute in a data center and users access their applications using stateless “thin-client” devices. This paradigm promises significant benefits in terms of data security, flexibility, and reduction of the total cost of ownership. However, in order to further optimize this approach while maintaining good user experience, efficient resource management algorithms are required. This paper formulates an analytical model allowing for detailed investigation of how power consumption of virtualized server farm depends on properties of workload, adaptiveness of virtualization infrastructure, and average density of virtual machines per physical server. Assumptions needed to develop the model are confirmed using statistical analysis of desktop workload traces and the model itself is also validated using simulations. We apply the model to compare power consumption of static and dynamic virtual machine allocation strategies. The results of the study can be used to develop online virtual machine migration algorithms. Even though this paper focuses on virtualized systems running desktop workloads, the modeling approach is general and can be applied in other contexts.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130908683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation environment for design and verification of Network-on-Chip and multi-core systems
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366697
G. Khan, V. Dumitriu
The concept of Network-on-Chip (NoC) presents system designers with a new approach to the design of on-chip interconnection structures. However, such networks confront designers with a large array of design parameters and decisions, many of which are critical to the efficient operation of NoC systems. To aid the design process of complex systems-on-chip, this paper presents a NoC simulation environment that has been developed and implemented using SystemC, a transaction-level modeling language. The simulation environment consists of on-chip components as well as traffic generators, which can generate various types of traffic patterns. A set of simulation results demonstrates the types of parameters, such as topology, that can affect the performance of on-chip systems in terms of network latency and achievable throughput. The results also verify the modeling capabilities of the proposed environment.
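The paper's environment is written in SystemC at the transaction level; purely for illustration, the Python fragment below shows the kind of experiment such an environment supports: a uniform-random traffic generator over an assumed 4x4 mesh with XY routing, reporting zero-load latency from hop counts and assumed per-hop delays.

```python
# Illustrative only (the paper's environment is SystemC/TLM): zero-load
# packet latency under uniform-random traffic on an assumed 4x4 mesh.
import random

MESH = 4                      # assumed mesh dimension (4x4)
ROUTER_DELAY = 3              # assumed cycles spent in each router
LINK_DELAY = 1                # assumed cycles per link traversal

def hops(src, dst):
    (sx, sy), (dx_, dy) = src, dst
    return abs(sx - dx_) + abs(sy - dy)       # XY (dimension-order) routing

def zero_load_latency(packets=10000):
    nodes = [(x, y) for x in range(MESH) for y in range(MESH)]
    total = 0
    for _ in range(packets):
        src, dst = random.sample(nodes, 2)    # uniform-random traffic pattern
        total += hops(src, dst) * (ROUTER_DELAY + LINK_DELAY)
    return total / packets

print(zero_load_latency())
```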
{"title":"Simulation environment for design and verification of Network-on-Chip and multi-core systems","authors":"G. Khan, V. Dumitriu","doi":"10.1109/MASCOT.2009.5366697","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366697","url":null,"abstract":"The conception of Network-on-Chip (NoC) presents system designers with a new approach to the design of on-chip interconnection structures. However, such networks present designers with a large array of design parameters and decisions, many of which are critical to the efficient operation of NoC systems. To aid the design process of complex systems-on-chip, this paper presents a NoC simulation environment that has been developed and implemented using SystemC, a transaction-level modeling language. The simulation environment consists of on-chip components as well as traffic generators, which can generate various types of traffic patterns. A set of simulation results demonstrates the types of parameters that can affect performance of on-chip systems, including topology, network latency and achievable throughput. The results also verify the modeling capabilities of the proposed environment.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125107056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Balancing soft error coverage with lifetime reliability in redundantly multithreaded processors
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5363142
Taniya Siddiqua, S. Gurumurthi
Silicon reliability is a key challenge facing the microprocessor industry. Processors need to be designed such that they are resilient against both soft errors and lifetime reliability phenomena. However, techniques developed to address one class of reliability problems may impact other aspects of silicon reliability. In this paper, we show that Redundant Multi-Threading (RMT), which provides soft error protection, degrades lifetime reliability. We then explore two different architectural approaches to tackle this problem, namely, Dynamic Voltage Scaling (DVS) and partial RMT. We show that each approach has certain strengths and weaknesses with respect to performance, soft error coverage, and lifetime reliability. We then propose and evaluate a hybrid approach that combines DVS and partial RMT. We show that this approach provides better improvement in lifetime reliability than DVS or partial RMT alone, buys back a significant amount of performance that is lost due to DVS, and provides nearly complete soft error coverage.
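None of the paper's models are reproduced here; the toy calculation below only illustrates the direction of the trade-off, scaling relative lifetime with an Arrhenius-style temperature term under invented steady-state temperatures for full RMT, RMT with DVS, and partial RMT.

```python
# Back-of-the-envelope illustration, not the paper's methodology: relative
# lifetime under an Arrhenius-style temperature acceleration factor
# (higher steady-state temperature -> shorter expected lifetime).
import math

K_B = 8.617e-5                      # Boltzmann constant [eV/K]
EA = 0.9                            # assumed activation energy [eV]

def mttf_rel(temp_k, ref_temp_k=345.0):
    """Lifetime relative to a reference temperature."""
    return math.exp(EA / K_B * (1.0 / temp_k - 1.0 / ref_temp_k))

# Assumed steady-state temperatures, for illustration only.
t_rmt     = 355.0                   # full RMT: both threads active, hotter
t_rmt_dvs = 348.0                   # RMT with DVS: lower V/f, cooler, slower
t_partial = 350.0                   # partial RMT: redundancy only part of the time

print(mttf_rel(t_rmt), mttf_rel(t_rmt_dvs), mttf_rel(t_partial))
```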
{"title":"Balancing soft error coverage with lifetime reliability in redundantly multithreaded processors","authors":"Taniya Siddiqua, S. Gurumurthi","doi":"10.1109/MASCOT.2009.5363142","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5363142","url":null,"abstract":"Silicon reliability is a key challenge facing the microprocessor industry. Processors need to be designed such that they are resilient against both soft errors and lifetime reliability phenomena. However, techniques developed to address one class of reliability problems may impact other aspects of silicon reliability. In this paper, we show that Redundant Multi-Threading (RMT), which provides soft error protection, exacerbates lifetime reliability. We then explore two different architectural approaches to tackle this problem, namely, Dynamic Voltage Scaling (DVS) and partial RMT. We show that each approach has certain strengths and weaknesses with respect to performance, soft error coverage, and lifetime reliability. We then propose and evaluate a hybrid approach that combines DVS and partial RMT. We show that this approach provides better improvement in lifetime reliability than DVS or partial RMT alone, buys back a significant amount of performance that is lost due to DVS, and provides nearly complete soft error coverage.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127230365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Horizon — Exploiting timing information for parallel network simulation
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366710
G. Kunz, O. Landsiedel, Klaus Wehrle
Network simulation faces an increasing demand for highly detailed simulation models which in turn require efficient handling of their inherent computational complexity. This demand for detailed models includes both accurate estimations of processing time and in-depth modeling of wireless technologies. For instance, one might want to investigate if a particular device can incorporate a computationally complex radio transmission technology while meeting the deadlines of a multi-media streaming application such as VoIP.
{"title":"Horizon — Exploiting timing information for parallel network simulation","authors":"G. Kunz, O. Landsiedel, Klaus Wehrle","doi":"10.1109/MASCOT.2009.5366710","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366710","url":null,"abstract":"Network simulation faces an increasing demand for highly detailed simulation models which in turn require efficient handling of their inherent computational complexity. This demand for detailed models includes both accurate estimations of processing time and in-depth modeling of wireless technologies. For instance, one might want to investigate if a particular device can incorporate a computationally complex radio transmission technology while meeting the deadlines of a multi-media streaming application such as VoIP.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127392637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Globally fair radio resource allocation for wireless mesh networks
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366631
Ashish Raniwala, Pradipta De, Srikant Sharma, Rupa Krishnan, T. Chiueh
Network flows running on a wireless mesh network (WMN) may suffer from partial failures in the form of serious throughput degradation, sometimes to the extent of starvation, because of weaknesses in the underlying MAC protocol, dissimilar physical transmission rates or different degrees of local congestion. Most existing WMN transport protocols fail to take these factors into account. This paper describes the design, implementation and evaluation of a coordinated congestion control (C3L) algorithm that guarantees fair resource allocation under adverse scenarios and thus provides end-to-end max-min fairness among competing flows. The C3L algorithm features an advanced topology discovery mechanism that detects the inhibition of wireless communication links, and a general collision domain capacity re-estimation mechanism that effectively addresses such inhibition. A comprehensive ns-2-based simulation study as well as empirical measurements taken from an IEEE 802.11a-based multi-hop wireless testbed demonstrate that the C3L algorithm greatly improves inter-flow fairness, eliminates the starvation problem, and at the same time maintains high radio resource utilization efficiency.
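C3L's own mechanisms are not spelled out in the abstract; the sketch below instead shows the standard progressive-filling computation of max-min fair rates, which is the fairness objective the abstract names, over flows that share capacity-limited collision domains. The flow and domain definitions in the example are invented.

```python
# Not the C3L algorithm itself: progressive filling to compute max-min fair
# rates for flows sharing capacity-limited collision domains.
def max_min_fair(flows, domains):
    """flows: {flow: set of domain ids it crosses};
       domains: {domain id: capacity}. Returns {flow: rate}."""
    rate = {f: 0.0 for f in flows}
    remaining = dict(domains)
    active = set(flows)
    while active:
        # Largest equal increment before some domain saturates.
        inc = min(remaining[d] / sum(1 for f in active if d in flows[f])
                  for d in remaining
                  if any(d in flows[f] for f in active))
        saturated = set()
        for d in remaining:
            users = [f for f in active if d in flows[f]]
            if users:
                remaining[d] -= inc * len(users)
                if remaining[d] <= 1e-9:
                    saturated.add(d)
        for f in list(active):
            rate[f] += inc
            if flows[f] & saturated:      # flow hit a bottleneck: freeze it
                active.remove(f)
    return rate

# Two flows share domain 'A' (capacity 10); flow 2 also crosses 'B' (capacity 3).
print(max_min_fair({1: {'A'}, 2: {'A', 'B'}}, {'A': 10.0, 'B': 3.0}))
```

In the example, flow 2 is limited to 3.0 by its bottleneck domain 'B' and flow 1 receives the remaining 7.0 of domain 'A'; C3L pursues this kind of outcome while also re-estimating collision-domain capacity and detecting link inhibition.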
{"title":"Globally fair radio resource allocation for wireless mesh networks","authors":"Ashish Raniwala, Pradipta De, Srikant Sharma, Rupa Krishnan, T. Chiueh","doi":"10.1109/MASCOT.2009.5366631","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366631","url":null,"abstract":"Network flows running on a wireless mesh network (WMN) may suffer from partial failures in the form of serious throughput degradation, sometimes to the extent of starvation, because of weaknesses in the underlying MAC protocol, dissimilar physical transmission rates or different degrees of local congestion. Most existing WMN transport protocols fail to take these factors into account. This paper describes the design, implementation and evaluation of a coordinated congestion control (C3L) algorithm that guarantees fair resource allocation under adverse scenarios and thus provides end-to-end max-min fairness among competing flows. The C3L algorithm features an advanced topology discovery mechanism that detects the inhibition of wireless communication links, and a general collision domain capacity re-estimation mechanism that effectively addresses such inhibition. A comprehensive ns-2-based simulation study as well as empirical measurements taken from an IEEE 802.11a-based multi-hop wireless testbed demonstrate that the C3L algorithm greatly improves inter-flow fairness, eliminates the starvation problem, and at the same time maintains high radio resource utilization efficiency.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127818279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Workload modeling using pseudo2D-HMM
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366721
Alessandro Moro, E. Mumolo, M. Nolich
In this paper, we present a novel approach for the accurate modeling of computer workloads. According to this approach, the sequences of features generated by a program during its execution are treated as time series and are processed with signal processing techniques both for feature extraction and for statistical pattern matching. In the feature extraction phase we used spectral analysis to describe the sequence and to retain the important information. In the pattern matching phase we used a simplified form of the bidimensional Hidden Markov Model, called pseudo2D-HMM, as the statistical machine learning algorithm. Several processes of the same workload are necessary to obtain a 2D-HMM model of the workload. In this way, the models are obtained in an initial training phase; we developed techniques for on-line workload classification of a running process and for synthetic trace generation. The proposed algorithm is evaluated via trace-driven simulations using the SPEC 2000 workloads. We show that pseudo2D-HMMs accurately describe memory reference sequences; the classification accuracy is about 92% with six different workloads.
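As a greatly simplified stand-in for the paper's pipeline (not a pseudo2D-HMM), the sketch below keeps the spectral feature-extraction step but replaces the HMM with a single diagonal Gaussian per workload class, just to make the train/classify flow concrete; the window size and the number of retained FFT bins are arbitrary choices.

```python
# Simplified stand-in for the pseudo2D-HMM pipeline: spectral features per
# window of a memory-reference trace, scored with one Gaussian per class.
import numpy as np

def spectral_features(trace, win=256, keep=16):
    """Split an address trace into fixed windows and keep the magnitudes of
    the first few FFT bins of each window as its feature vector."""
    trace = np.asarray(trace, dtype=float)
    n_win = len(trace) // win
    frames = trace[:n_win * win].reshape(n_win, win)
    frames = frames - frames.mean(axis=1, keepdims=True)
    return np.abs(np.fft.rfft(frames, axis=1))[:, :keep]   # (n_win, keep)

def train(traces_by_class):
    """Fit a diagonal Gaussian over window features for each workload class."""
    models = {}
    for name, trace in traces_by_class.items():
        feats = spectral_features(trace)
        models[name] = (feats.mean(axis=0), feats.std(axis=0) + 1e-6)
    return models

def classify(trace, models):
    feats = spectral_features(trace)
    def loglik(mu, sd):
        z = (feats - mu) / sd
        return -0.5 * np.sum(z ** 2 + 2 * np.log(sd))
    return max(models, key=lambda name: loglik(*models[name]))
```

A faithful reproduction would replace the Gaussian scorer with per-class pseudo2D-HMMs trained on several runs of each workload, as the abstract describes.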
{"title":"Workload modeling using pseudo2D-HMM","authors":"Alessandro Moro, E. Mumolo, M. Nolich","doi":"10.1109/MASCOT.2009.5366721","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366721","url":null,"abstract":"In this paper, we present a novel approach for accurate modeling of computer workloads. According to this approach, the sequences of features generated by a program during its execution are considered as time series and are processed with signal processing techniques both for feature extraction and statistical pattern matching. In the feature extraction phase we used spectral analysis for describing the sequence and to retain the important information. In the pattern matching phase we used a simplified form of bidimensional Hidden Markov Model, called pseudo2D-HMM, as Statistical Machine Learning Algorithm. Several processes of the same workload are necessary to obtain a 2D-HMM model of the workload. In this way, the models are obtained in an initial training phase; we developed techniques for on-line workload classification of a running process and for synthetic traces generation. The proposed algorithms is evaluated via trace-driven simulations using the SPEC 2000 workloads. We show that pseudo2D-HMMs accurately describe memory references sequences; the classification accuracy is about 92% with six different workloads.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124257353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance modeling of systems using fair share scheduling with Layered Queueing Networks
Pub Date: 2009-12-28 | DOI: 10.1109/MASCOT.2009.5366689
Lianhua Li, G. Franks
Fair-share scheduling attempts to grant access to a resource based on the amount of “share” that a task possesses. It is widely used in places such as Internet routing, and recently, in the Linux kernel. Software performance engineering is concerned with creating responsive applications and often uses modeling to predict the behaviour of a system before the system is built. This work extends the Layered Queueing Network (LQN) performance model used to model distributed software systems by including hierarchical fair-share scheduling with both guarantees and caps. To exercise the model, the Completely Fair Scheduler, found in recent Linux kernels, is incorporated into PARASOL, the underlying simulation engine of the LQN simulator, lqsim. This simulator is then used to study the effects of fair-share scheduling on a multi-tier implementation of a building security system. The results here show that fair-share scheduling with guarantees is not sufficient when an application is layered into multiple tiers because of contention at lower layers in the system. Fair-share scheduling with caps must be used instead.
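The CFS/PARASOL integration is not shown in the abstract; the small allocator below only illustrates the semantic difference the conclusion rests on, between share guarantees (a minimum that spare capacity may exceed) and caps (a hard ceiling), for hypothetical "web" and "db" tiers.

```python
# Not the CFS/PARASOL implementation: a toy allocator contrasting share
# guarantees with share caps over one scheduling interval.
def allocate(groups, capped):
    """groups: {name: (share, demand)} with shares summing to <= 1.0; values
       are fractions of one CPU. Returns {name: allocated fraction}."""
    alloc = {name: min(share, demand) for name, (share, demand) in groups.items()}
    spare = 1.0 - sum(alloc.values())
    if not capped and spare > 0:
        # Guarantee semantics: unused capacity flows to groups that still
        # want more, in proportion to their configured shares.
        needy = {n for n, (s, d) in groups.items() if d > alloc[n]}
        while needy and spare > 1e-9:
            weight = sum(groups[n][0] for n in needy)
            for n in list(needy):
                extra = spare * groups[n][0] / weight
                take = min(extra, groups[n][1] - alloc[n])
                alloc[n] += take
                if groups[n][1] - alloc[n] <= 1e-9:
                    needy.discard(n)
            spare = 1.0 - sum(alloc.values())
    # Cap semantics: alloc never exceeds min(share, demand), so a group can
    # never use more than its configured share even if the CPU is idle.
    return alloc

# Web tier with share 0.6, database tier with share 0.4 (hypothetical):
groups = {'web': (0.6, 0.9), 'db': (0.4, 0.2)}
print(allocate(groups, capped=False))   # web absorbs the db tier's slack
print(allocate(groups, capped=True))    # web held to 0.6 despite idle CPU
```

The capped call holding the web tier to its 0.6 share even with idle capacity is the behaviour the abstract argues is needed for multi-tier applications.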
{"title":"Performance modeling of systems using fair share scheduling with Layered Queueing Networks","authors":"Lianhua Li, G. Franks","doi":"10.1109/MASCOT.2009.5366689","DOIUrl":"https://doi.org/10.1109/MASCOT.2009.5366689","url":null,"abstract":"Fair-share scheduling attempts to grant access to a resource based on the amount of ¿share¿ that a task possesses. It is widely used in places such as Internet routing, and recently, in the Linux kernel. Software performance engineering is concerned with creating responsive applications and often uses modeling to predict the behaviour of a system before the system is built. This work extends the Layered Queueing Network (LQN) performance model used to model distributed software systems by including hierarchical fair-share scheduling with both guarantees and caps. To exercise the model, the Completely Fair Scheduler, found in recent Linux kernels, is incorporated into PARASOL, the underlying simulation engine of the LQN simulator, lqsim. This simulator is then used to study the effects of fair-share scheduling on a multi-tier implementation of a building security system. The results here show that fair-share scheduling with guarantees is not sufficient when an application is layered into multiple tiers because of contention at lower layers in the system. Fair-share scheduling with caps must be used instead.","PeriodicalId":275737,"journal":{"name":"2009 IEEE International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133693401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}