Weighted round robin load balancing is a common routing policy offered in cloud load balancers. However, there is a lack of effective mechanisms to decide the weights assigned to each server so as to achieve an overall optimal revenue for the system. In this paper, we first experimentally explore the relation between probabilistic routing and weighted round robin load balancing policies. The experiments reveal a similar behavior between the two policies, which makes it possible to assign the weights according to routing probabilities estimated with queueing-theoretic heuristics and optimization algorithms studied in the literature. We focus in particular on algorithms based on closed queueing networks for multi-class workloads, which can be used to describe applications with service level agreements differentiated across users. We also compare the efficiency of the queueing-theoretic methods with simple heuristics that do not require specifying a stochastic model of the application. Results indicate that the queueing-theoretic algorithms yield significantly better results, with respect to throughput maximization, than routing weights proportional to VM capacity.
{"title":"Evaluating Weighted Round Robin Load Balancing for Cloud Web Services","authors":"Weikun Wang, G. Casale","doi":"10.1109/SYNASC.2014.59","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.59","url":null,"abstract":"Weighted round robin load balancing is a common routing policy offered in cloud load balancers. However, there is a lack of effective mechanisms to decide the weights assigned to each server to achieve an overall optimal revenue of the system. In this paper, we first experimentally explore the relation between probabilistic routing and weighted round robin load balancing policies. From the experiment a similar behavior is found between these two policies, which makes it possible to assign the weights according to the routing probability estimated from queueing theoretic heuristic and optimization algorithms studied in the literature. We focus in particular on algorithms based on closed queueing networks for multi-class workloads, which can be used to describe application with service level agreements differentiated across users. We also compare the efficiency of queueing theoretic methods with simple heuristics that do not require to specify a stochastic model of the application. Results indicate that queueing theoretical algorithms yield significantly better results than routings proportional to the VM capacity with respect to throughput maximization.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115134690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
N-dimensional discrete objects can be interpreted as cubical complexes, which are well suited to computing homology groups and thereby understanding the original discrete object. The classic approach consists in computing the Smith Normal Form of some matrices associated with the cubical complex. Subsequent approaches mainly pre-process these matrices in order to reduce their size. In this paper we propose a new approach, initially based on Discrete Morse Theory, which computes some homological information (Betti numbers and representative cycles) without calculating the Smith Normal Form. It works in any dimension and can also be applied to any kind of regular cell complex.
{"title":"Computing Homological Information Based on Directed Graphs within Discrete Objects","authors":"A. Gonzalez-Lorenzo, A. Bac, J. Mari, P. Real","doi":"10.1109/SYNASC.2014.82","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.82","url":null,"abstract":"N-dimensional discrete objects can be interpreted as cubical complexes which are suitable for the study of their homology groups in order to understand the original discrete object. The classic approach consists in computing the Normal Smith Form of some matrices associated to the cubical complex. Further approaches deal mainly with a pre-processing of the matrices in order to reduce their size. In this paper we propose a new approach, initially based on Discrete Morse Theory, which computes some homological information (Betti numbers and representative cycles) without calculating the Normal Smith Form. It works on any dimension, and it can also be applied to any kind of regular cell complex.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122267522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper deals with numerical estimation of Lipschitz bounds that locally relate the reconstruction error to the measurement error in the compressive sensing framework. Most recent theoretical papers in the field parametrize such bounds relative to certain families of vectors called dual certificates, which are fundamental to several reconstruction criteria. The paper provides two algorithms for computing dual certificates that optimize their associated reconstruction error bounds. We give a greedy algorithm that provides a fast approximate solution, and a convex-projection algorithm that computes the exact optimum.
{"title":"Lipschitz Bounds for Noise Robustness in Compressive Sensing: Two Algorithms","authors":"Marc Nicodeme, C. Dossal, F. Turcu, Y. Berthoumieu","doi":"10.1109/SYNASC.2014.19","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.19","url":null,"abstract":"The paper deals with numerical estimations of Lipschitz bounds relating locally the reconstruction error to the measurement error in the compressive sensing framework. Most recent theoretical papers in the field parametrize such bounds relatively to certain families of vectors called dual certificates, which are fundamental to several reconstruction criteria. The paper provides two algorithms for computing dual certificates that optimize their related reconstruction error bounds. We give a greedy algorithm that provides a fast approximate solution, and a convex-projection algorithm that computes the exact optimum.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115878486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing applications across the physical-digital space requires the homogeneous interconnection of people, physical devices, services and various data sources as first-class entities of complex socio-technical systems. In this paper, we describe socio-technical networks (STNs) as the building blocks of a semantic, open and distributed Social Web of Things. We address the problem of enabling autonomous non-human agents as participants in an open set of STNs. Our approach is to provide agents with machine-readable descriptions of STNs, of the operations required for participating in such systems, and of supported implementations of those operations. To this end, we present the STN ontology and illustrate its applicability. Even though the STN ontology is a work in progress, the core concepts and properties described in this paper already allow us to create concrete specifications of STN platforms. We discuss the positioning of this ontology with respect to several well-known related vocabularies.
{"title":"Open and Interoperable Socio-technical Networks","authors":"A. Ciortea, O. Boissier, Antoine Zimmermann, A. Florea","doi":"10.1109/SYNASC.2014.41","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.41","url":null,"abstract":"Developing applications across the physical-digital space requires the homogeneous interconnection of people, physical devices, services and various data sources as first-class entities of complex socio-technical systems. In this paper, we describe socio-technical networks (STNs) as the building blocks of a semantic, open and distributed Social Web of Things. We address the problem of enabling autonomous non-human agents as participants in an open set of STNs. Our approach is to provide agents with machine-readable descriptions of STNs, of operations required for participating in such systems, and of supported implementations for those operations. Towards this aim, we present the STN ontology and we illustrate its applicability. Even though the STN ontology is a work in progress, using the core concepts and properties described in this paper we are able to create concrete specifications of STN platforms. We discuss the positioning of this ontology with respect to several well-known and related vocabularies.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"272 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122116462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The design of programs that are both fast and numerically accurate is a real challenge. The CGPE tool was thus introduced to assist programmers in synthesizing fast and numerically certified code in fixed-point arithmetic for the particular case of polynomial evaluation. For performance reasons, this tool produces programs using exclusively unsigned arithmetic and addition/subtraction or multiplication operations, which imposes constraints on the fixed-point operands. These choices are well suited to the implementation of certain mathematical functions, but they prevent the tool from tackling a broader class of polynomial evaluation problems. In this paper, we first present a rigorous arithmetic model for CGPE that takes signed arithmetic into account. Then, in order to make the most of advanced instructions, we enhance the tool with a multi-criteria instruction selection module. This allows us to optimize the generated code according to different criteria, such as operation count, evaluation latency, or accuracy. Finally, we illustrate this technique on operation count, and we show that it yields an average reduction of up to 22.3% in the number of operations in the synthesized code of some functions. We also give practical examples showing the impact of accuracy-based rather than latency-based instruction selection.
{"title":"Automated Synthesis of Target-Dependent Programs for Polynomial Evaluation in Fixed-Point Arithmetic","authors":"C. Mouilleron, Amine Najahi, G. Revy","doi":"10.1109/SYNASC.2014.27","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.27","url":null,"abstract":"The design of both fast and numerically accurate programs is a real challenge. Thus, the CGPE tool was introduced to assist programmers in synthesizing fast and numerically certified codes in fixed-point arithmetic for the particular case of polynomial evaluation. For performance purposes, this tool produces programs using exclusively unsigned arithmetic and addition/subtraction or multiplication operations, thus requiring some constraints on the fixed-point operands. These choices are well-suited when dealing with the implementation of certain mathematical functions, however they prevent from tackling a broader class of polynomial evaluation problems. In this paper, we first expose a rigorous arithmetic model for CGPE that takes into account signed arithmetic. Then, in order to make the most out of advanced instructions, we enhance this tool with a multi-criteria instruction selection module. This allows us to optimize the generated codes according to different criteria, like operation count, evaluation latency, or accuracy. Finally, we illustrate this technique on operation count, and we show that it yields an average reduction of up to 22.3% of the number of operations in the synthesized codes of some functions. We also explicit practical examples to show the impact of using accuracy based rather than latency based instruction selection.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126254257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New trends in distributed computing have changed the way we do computing, whether in cloud infrastructures or high-performance computing. Resource virtualization technologies enable elastic resource provisioning and management through easy replication of virtual nodes or virtual machine migration. In order to provide high availability and reliability in such distributed environments, where resources are managed and served in the form of virtual machines, specific load balancing and fault-handling strategies are needed. Based on fault tree analysis concepts, we propose a distributed and autonomous approach to fault management that uses fault agents able to assess and predict, for each virtualized node, its current or future fault state. Accordingly, each node can decide whether to accept future jobs, delegate jobs to its own replicated instances, or start a live migration process as a second strategy for ensuring availability and continuity of the service.
{"title":"Reliable Management of Virtualized Resources Using Fault Trees","authors":"A. Butoi, Alexandru-Ioan Stan, G. Silaghi","doi":"10.1109/SYNASC.2014.49","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.49","url":null,"abstract":"The new trends in distributed computing has changed the way we do computing when talking about cloud infrastructures or high-performance computing. Resource virtualization technologies enabled elasticity of resource provisioning and management through easy replication of virtual nodes or virtual machine migration. In order to provide high availability and reliability in such distributed environments where resources are managed and served in form of virtual machines, specific load balancing and fault strategies are needed. Based on fault tree analysis concepts, we propose a distributed and autonomous approach to manage faults using fault agents able to asses and predict for each virtualized node, its state of fault or future fault. Accordingly, each node can take a decision about accepting future jobs, delegate jobs to own replicated instances or start a live migration process as a second strategy for assuring availability and continuity of the service.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127112247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the last few years, GPS guidance systems have become increasingly popular. GPS-equipped devices such as smart phones are more and more common, and larger amounts of GPS data are becoming available to geographic applications. Having precise information about the routes of a driver over a period of time can be useful for learning and estimating both the traffic and the driver's intent at a specific moment in time. With our solution we want to go a step beyond existing GPS navigation systems by designing a mechanism that is capable of learning a driver's routes. Based on our HMM method and a training process, we could in the future offer a point-to-point, environmentally friendly routing mechanism anywhere within a selected road network. Our study is based on real data collected from various local drivers and can easily be applied in modern intelligent traffic systems. The system comes with a user interface that displays the GPS routes on the map for a specific driver. These routes can be analyzed using parameters such as time, distance, height and speed. We also developed a tool that computes the maximum likelihood using the Viterbi algorithm in order to validate the selection of the next route segment for a sampled road network.
{"title":"Mining GPS Data to Learn Driver's Route Patterns","authors":"E. Necula","doi":"10.1109/SYNASC.2014.43","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.43","url":null,"abstract":"Over the last few years, GPS guidance systems have become increasingly more popular. GPS-equipped devices like smart phones become more common and larger amounts of GPS data become available to geographic applications. Having precise information about the routes of a driver during a period of time can be useful to learn and estimate both the traffic and the driver's intent at specific moment of time. With our solution we want to go a step further to the existing GPS navigation systems by designing a mechanism that is capable to learn driver's routes. We could offer in the future a point-to-point concept for an environmentally friendly routing mechanism anywhere within a selected road network based on our HMM-method and a training process. Our study is based on real data collected from various local drivers and can be easily applied in modern intelligent traffic systems. The system comes with a user interface that displays the GPS routes on the map for a specific driver. These routes can be analyzed using parameters like time, distance, height and speed. Also we developed a tool that manages to compute the maximum-likelihood using the Viterbi algorithm in order to validate the next route segment election for a sampled road network.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126731719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we propose four ontologies for the semantic modeling of information for Freight Transportation Exchanges and discuss the relationships between them. These ontologies play an important role in the semantic modeling of freight transportation entities and in supporting the development of agent-based semantic logistics services. This work complements our previous proposal of an agent system for brokering freight transportation exchanges. It extends existing systems for the online announcement of transportation opportunities with automated matchmaking services that facilitate connecting the owners of goods with freight transportation providers, as well as their appropriate contracting.
{"title":"Semantic Modeling of Information for Freight Transportation Broker","authors":"Lucian Luncean, C. Bǎdicǎ","doi":"10.1109/SYNASC.2014.76","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.76","url":null,"abstract":"In this paper we propose four ontologies for semantic modeling of information for Freight Transportation Exchanges and we discuss the relationships between them. They play an important role in semantic modeling of freight transportation entities and in supporting the development of agent-based semantic logistics services. This work complements our previous proposal of an agent system for brokering of freight transportation exchanges. It extends existing systems for online announcement of transportation opportunities with the provisioning of automated matchmaking services that facilitate the connection of the owners of goods with the freight transportation providers, as well as their appropriate contracting.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122411017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud Computing is emerging as a major trend in the ICT industry. However, as with any new technology, it raises new major challenges, one of which concerns resource provisioning. Indeed, modern Cloud applications deal with a dynamic context and have to constantly adapt themselves in order to meet Quality of Service (QoS) requirements. This situation calls for advanced solutions designed to dynamically provision cloud resources with the aim of guaranteeing QoS levels. This work presents a capacity allocation algorithm whose goal is to minimize the total execution cost while satisfying constraints on the average response time of Cloud-based applications. We propose a receding horizon control technique, which can be employed to handle multiple classes of requests. We compare our solution with an oracle with perfect knowledge of the future and with a well-known heuristic described in the literature. The experimental results demonstrate that our solution outperforms the existing heuristic, producing results very close to the optimal ones. Furthermore, a sensitivity analysis over two different time scales indicates that finer-grained time scales are more appropriate for spiky workloads, whereas smooth traffic conditions are better handled at coarser-grained time scales. Our analytical results are also validated through simulation, which also shows the impact of random perturbations in the Cloud environment on our solution.
{"title":"A Receding Horizon Approach for the Runtime Management of IaaS Cloud Systems","authors":"D. Ardagna, M. Ciavotta, R. Lancellotti","doi":"10.1109/SYNASC.2014.66","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.66","url":null,"abstract":"Cloud Computing is emerging as a major trend in ICT industry. However, as with any new technology it raises new major challenges and one of them concerns the resource provisioning. Indeed, modern Cloud applications deal with a dynamic context and have to constantly adapt themselves in order to meet Quality of Service (QoS) requirements. This situation calls for advanced solutions designed to dynamically provide cloud resource with the aim of guaranteeing the QoS levels. This work presents a capacity allocation algorithm whose goal is to minimize the total execution cost, while satisfying some constraints on the average response time of Cloud based applications. We propose a receding horizon control technique, which can be employed to handle multiple classes of requests. We compare our solution with an oracle with perfect knowledge of the future and with a well-known heuristic described in the literature. The experimental results demonstrate that our solution outperforms the existing heuristic producing results very close to the optimal ones. Furthermore, a sensitivity analysis over two different time scales indicates that finer grained time scales are more appropriate for spiky workloads, whereas smooth traffic conditions are better handled by coarser grained time scales. Our analytical results are also validated through simulation, which shows also the impact on our solution of Cloud environment random perturbations.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122823303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Particle-based models are widespread in the field of computer graphics and are mostly used in soft-body dynamics for simulating surfaces such as cloth, fluids and biological tissue. As model resolution and scenario complexity increase, the computation required by these applications becomes overwhelming for a single processing unit, especially when interactivity is required, so parallelization must be employed in order to provide a fast, flexible and scalable simulation environment. High-performance computing architectures such as graphics clusters can provide the necessary parallel computing and rendering power, but the distributed and remote nature of the computation and rendering process introduces specific challenges that must be tackled. We propose a parallel, distributed, modular system architecture for a particle-based simulator on GPU clusters, encapsulating powerful parallel and distributed processing, distributed rendering and remote interaction techniques for flexible, fast simulation of large models and complex scenarios. To validate and evaluate the proposed architecture, we perform a visual comparison of two widely used numeric integration methods, namely the explicit Velocity Verlet and implicit Euler integration techniques.
{"title":"A Parallel, Distributed, High-Performance Architecture for Simulating Particle-Based Models","authors":"A. Sabou, D. Gorgan","doi":"10.1109/SYNASC.2014.73","DOIUrl":"https://doi.org/10.1109/SYNASC.2014.73","url":null,"abstract":"Particle-based models are widespread in the field of computer graphics and are mostly used in soft-body dynamics, for simulating surfaces such as cloth, fluids and biologic tissue. As model resolution and scenario complexity increases, the computation required for these particular applications becomes overwhelming for a single processing unit, especially when interactivity is required, thus parallelization must be employed in order to provide a fast, flexible and scalable simulation environment. High-performance computing architectures such as graphics clusters may provide the parallel computing and rendering power required, but the distributed and remote nature of the computation and rendering process introduce specific challenges that must be tackled. We propose a parallel, distributed, modular system architecture for a particle-based simulator on GPU clusters, encapsulating powerful parallel and distributed processing, distributed rendering and remote interaction techniques, for flexible, fast simulation of large models and complex scenarios. For validating and evaluating the proposed architecture, we perform a visual comparison of two largely used numeric integration methods, namely the explicit Velocity Verlet and implicit Euler integration techniques.","PeriodicalId":150575,"journal":{"name":"2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123191405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}