T. Genez, Ilia Pietri, R. Sakellariou, L. Bittencourt, E. Madeira
In this paper, we propose a procedure based on Particle Swarm Optimization (PSO) to guide the user in splitting an amount of CPU capacity (sum of frequencies) among a fixed number of resources in order to minimize the execution time (makespan) of the workflow. The proposed procedure was evaluated and compared with a naive approach, which selects only identical CPU frequency configurations for resources. Simulation results show that, by keeping the overall amount of provisioned CPU frequency constant, the proposed PSO-based approach was able to reduce the makespan of the workflow by carefully selecting different CPU frequencies for resources.
{"title":"A Particle Swarm Optimization Approach for Workflow Scheduling on Cloud Resources Priced by CPU Frequency","authors":"T. Genez, Ilia Pietri, R. Sakellariou, L. Bittencourt, E. Madeira","doi":"10.1109/UCC.2015.40","DOIUrl":"https://doi.org/10.1109/UCC.2015.40","url":null,"abstract":"In this paper, we propose a procedure based on Particle Swarm Optimization (PSO) to guide the user in splitting an amount of CPU capacity (sum of frequencies) among a fixed number of resources in order to minimize the execution time (makespan) of the workflow. The proposed procedure was evaluated and compared with a naive approach, which selects only identical CPU frequency configurations for resources. Simulation results show that, by keeping the overall amount of provisioned CPU frequency constant, the proposed PSO-based approach was able to reduce the makespan of the workflow by carefully selecting different CPU frequencies for resources.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130664866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
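The abstract does not give the authors' actual PSO formulation, but the core idea — searching over ways to split a fixed total CPU frequency across resources so that an estimated makespan is minimized — can be sketched as follows. Everything here is an illustrative assumption: the task model (independent workloads, greedy earliest-finish assignment), the PSO parameters, and the function names (`makespan`, `split_frequencies`) are ours, not the paper's.

```python
import random

def makespan(freqs, tasks):
    # Rough makespan estimate: greedily assign independent task workloads
    # (in cycles) to the resource that would finish each task earliest,
    # given the per-resource CPU frequencies.
    finish = [0.0] * len(freqs)
    for w in sorted(tasks, reverse=True):
        i = min(range(len(freqs)), key=lambda r: finish[r] + w / freqs[r])
        finish[i] += w / freqs[i]
    return max(finish)

def split_frequencies(total, n, tasks, swarm=20, iters=100, seed=1):
    """PSO search over frequency vectors of length n summing to `total`."""
    rng = random.Random(seed)

    def project(v):
        # Keep every frequency positive and renormalize so the overall
        # provisioned capacity (sum of frequencies) stays constant.
        v = [max(x, 1e-6) for x in v]
        s = sum(v)
        return [x * total / s for x in v]

    pos = [project([rng.random() for _ in range(n)]) for _ in range(swarm)]
    vel = [[0.0] * n for _ in range(swarm)]
    best = [p[:] for p in pos]                      # personal bests
    best_f = [makespan(p, tasks) for p in pos]
    g = best[min(range(swarm), key=lambda i: best_f[i])][:]  # global best

    for _ in range(iters):
        for i in range(swarm):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                # Standard PSO velocity update: inertia + cognitive + social.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (best[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i] = project(pos[i])
            f = makespan(pos[i], tasks)
            if f < best_f[i]:
                best[i], best_f[i] = pos[i][:], f
                if f < makespan(g, tasks):
                    g = pos[i][:]
    return g
```

With one large and two small tasks, the search tends to give one resource a higher frequency than a uniform split would, which is exactly the effect the abstract reports.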
A. Mandal, P. Ruth, I. Baldin, Yufeng Xin, C. Castillo, G. Juve, M. Rynge, E. Deelman, J. Chase
Recent advances in cloud technologies and on-demand network circuits have created an unprecedented opportunity to enable complex data-intensive scientific applications to run on dynamic, networked cloud infrastructure. However, there is a lack of tools for supporting high-level applications like scientific workflows on dynamically provisioned, virtualized, networked IaaS (NIaaS) systems. In this paper, we propose an end-to-end system consisting of application-aware and application-independent controllers that provision and adapt complex scientific workflows on NIaaS systems. The application-independent controller enhances the utility of NIaaS systems for higher-level applications by closing the gap between application abstractions and resource provisioning constructs. We also present our approach to predicting dynamic resource requirements for workflows using an application-aware controller that proactively evaluates alternative candidate resource allotments using workflow introspection. We show how these high-level resource requirements can be automatically transformed to low-level NIaaS operations to actuate infrastructure adaptation. The results of our evaluations show that we can make fairly accurate predictions, and the interplay of prediction and adaptation can balance performance and utilization for a representative data-intensive workflow.
{"title":"Adapting Scientific Workflows on Networked Clouds Using Proactive Introspection","authors":"A. Mandal, P. Ruth, I. Baldin, Yufeng Xin, C. Castillo, G. Juve, M. Rynge, E. Deelman, J. Chase","doi":"10.1109/UCC.2015.32","DOIUrl":"https://doi.org/10.1109/UCC.2015.32","url":null,"abstract":"Recent advances in cloud technologies and on-demand network circuits have created an unprecedented opportunity to enable complex data-intensive scientific applications to run on dynamic, networked cloud infrastructure. However, there is a lack of tools for supporting high-level applications like scientific workflows on dynamically provisioned, virtualized, networked IaaS (NIaaS) systems. In this paper, we propose an end-to-end system consisting of application-aware and application-independent controllers that provision and adapt complex scientific workflows on NIaaS systems. The application-independent controller enhances the utility of NIaaS systems for higher-level applications by closing the gap between application abstractions and resource provisioning constructs. We also present our approach to predicting dynamic resource requirements for workflows using an application-aware controller that proactively evaluates alternative candidate resource allotments using workflow introspection. We show how these high-level resource requirements can be automatically transformed to low-level NIaaS operations to actuate infrastructure adaptation. The results of our evaluations show that we can make fairly accurate predictions, and the interplay of prediction and adaptation can balance performance and utilization for a representative data-intensive workflow.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"4020 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127539570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
V. Stankovski, S. Taherizadeh, I. Taylor, Andrew C. Jones, C. Mastroianni, B. Becker, H. Suhartanto
This paper presents a design study of an environment that would provide resilience, high availability, reproducibility and reliability for cloud-based applications. The approach involves a resilient container overlay, which provides tools for tracking and optimizing container placement during the execution of a scientific experiment. The system is designed to detect failures and current performance bottlenecks, and to be capable of migrating running containers on the fly to servers better suited to their execution. This work is in the design phase; in this paper, we therefore outline the proposed architecture of the system and identify existing container management and migration tools that can be used in the implementation, where appropriate.
{"title":"Towards an Environment Supporting Resilience, High-Availability, Reproducibility and Reliability for Cloud Applications","authors":"V. Stankovski, S. Taherizadeh, I. Taylor, Andrew C. Jones, C. Mastroianni, B. Becker, H. Suhartanto","doi":"10.1109/UCC.2015.61","DOIUrl":"https://doi.org/10.1109/UCC.2015.61","url":null,"abstract":"This paper presents a design study of an environment that would provide for resilience, high-availability, reproducibility and reliability of Cloud-based applications. The approach involves the use of a resilient container overlay, which provides tools for tracking and optimizing container placement during the course of a scientific experiment execution. The system is designed to detect failure and current performance bottlenecks and be capable of migrating running containers on the fly to servers more optimal for their execution. This work is in the design phase and therefore in this paper, we outline the proposed architecture of system and identify existing container management and migration tools that can be used in the implementation, where appropriate.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126630516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The management and scheduling of workflows on computing resources is the focus of much research. In many cases, security is as important as system performance. In conventional scheduling techniques, jobs are represented as DAGs and then scheduled onto the resources. In this work, the tasks are first passed through authorization constraints [12], implemented via a role-based authorization policy; this process affects both application and system performance. The work schedules the tasks subject to these authorization constraints and compares the result with normal scheduling in order to determine the effect of the constraints. It further uses a duplication technique, applied on selectively chosen resources, to improve performance. Experiments are conducted to verify the effectiveness of the technique.
{"title":"Scheduling Workflows under Authorization Control","authors":"Nadeem Chaudhary, Mohammad A. Alghamdi","doi":"10.1109/UCC.2015.113","DOIUrl":"https://doi.org/10.1109/UCC.2015.113","url":null,"abstract":"The management of the workflows on computer resources and their scheduling is the main topic of much of the research. In many cases the security involvement is equally important with the performance of the system. In normal scheduling techniques the jobs are presented in the form of DAGs and then scheduled on to the resources. In this work we pass-through the tasks from the authorization policies constraints [12], which are implemented through role-based authorization policy. This process affects the performance for application and system. This work schedules the tasks passed through authorization constraints and compares it with the normal scheduling in order to determine the effect of the authorization constraints. Further this work uses duplication technique to improve the performance by using selective resources for duplication. Experiments are conducted to verify the effectiveness of the technique.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114688096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Malawski, B. Baliś, Kamil Figiela, Maciej Pawlik, M. Bubak
HyperFlow is a workflow engine that executes the tasks of scientific workflows on available computing resources (e.g. virtual machines in a cloud). PaaSage is a model-based cloud platform that provisions resources, deploys applications and automatically scales them according to application demands. One of the challenges is to design, in a generic way, the interplay between the application-specific workflow scheduler in HyperFlow and the generic provisioning and autoscaling components of PaaSage. Here we report on our current work on integrating HyperFlow applications with PaaSage, outline the architecture of the proposed solution, and present the CAMEL model of the application and the status of the prototype. The main conclusion is that, thanks to the model-based approach of PaaSage, it is possible to modularize the workflow application and make its components easily deployable across multiple clouds.
{"title":"Support for Scientific Workflows in a Model-Based Cloud Platform","authors":"M. Malawski, B. Baliś, Kamil Figiela, Maciej Pawlik, M. Bubak","doi":"10.1109/UCC.2015.70","DOIUrl":"https://doi.org/10.1109/UCC.2015.70","url":null,"abstract":"HyperFlow is a workflow engine that enables to execute tasks of scientific workflows on the available computing resources (e.g. Virtual Machines in a cloud). PaaSage is a model-based cloud platform to provision resources, deploy the application and automatically scale them according to the application demands. One of the challenges is to design the interplay mechanism between the application-specific workflow scheduler in HyperFlow with the generic provisioning and autoscaling components of PaaSage in a generic way. Here we report on our current developments in integrating HyperFlow applications with PaaSage, outline the architecture of the proposed solution, present the CAMEL model of the application and the prototype status. The main conclusion is that thanks to the model-based approach proposed by PaaSage it is possible to modularize the workflow application and make its components easily deployable across multiple clouds.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115839474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Fatema, D. Lewis, D. O’Sullivan, J. Morrison, Abdullah-Al Mazed
The emerging EU data protection regulation requires that, regardless of the location of its data centers, a cloud service provider comply with the regulation if it provides services to EU citizens. Handling personal data in a legally compliant way is therefore an important factor in ensuring the trustworthiness of a cloud service provider. In this paper we present a software component called the Contract Validation Service (ConVS), which validates digital contracts and helps to automate contract-based access to personal data. The paper then shows how an authorisation system can use the ConVS to automate legally compliant authorisation decisions from XACML-formatted EU Data Protection Directive rules. Such automation of contract-based access decisions offers the potential to significantly reduce the effort of ensuring the legal compliance of cloud service providers.
{"title":"Authorising Contract Based Access to Personal Data in the Cloud","authors":"K. Fatema, D. Lewis, D. O’Sullivan, J. Morrison, Abdullah-Al Mazed","doi":"10.1109/UCC.2015.99","DOIUrl":"https://doi.org/10.1109/UCC.2015.99","url":null,"abstract":"The emerging new EU data protection regulation requires that regardless of the location of the data centers a cloud service provider will have to comply with the EU data protection regulation if it provides services to EU citizens. Handling personal data in a legally compliant way is a very important factor for ensuring the trustworthiness of a cloud service provider. In this paper we present a software component called Contract Validation Service (ConVS) that validates digital contracts and helps to automate contract-based access to personal data. The paper then shows how an authorisation system can use the ConVS to automate legally compliant authorisation decisions from XACML format-ted EU Data Protection Derivative rules. Such automation in determining contract-based access decisions offers the potential to significantly reduce the effort of ensuring legal compliance of the cloud service providers.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115851776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
5G is the next generation of mobile networks. There are many requirements on 5G, such as high capacity, low latency, flexibility, and support for any-to-any communication. Cloud technology, in the form of a distributed cloud (also known as a network-embedded cloud), is an enabling technology for 5G, allowing flexible networks that meet different user application requirements. On the other hand, Machine Type Communication (MTC) is a primary application for 5G, but it can add a high volume of control signaling. To manage the expected high volume of control signaling introduced by MTC, we identified the main control events that generate signaling messages in the network. We then proposed a decentralized core network architecture optimized for the identified control events. The proposed control plane functions are independent in the sense that each can be executed separately. The control functions can use the distributed cloud to manage the enormous amount of control signaling by handling it locally. Additionally, we present an analysis of the control signaling performance of each proposed control function. We conclude that, where user density is high, it is beneficial to move session management to data centers collocated with the base stations in a 5G network.
{"title":"Distributed Cloud and De-centralized Control Plane: A Proposal for Scalable Control Plane for 5G","authors":"Amir Roozbeh","doi":"10.1109/UCC.2015.55","DOIUrl":"https://doi.org/10.1109/UCC.2015.55","url":null,"abstract":"5G is the next generation of mobile network. There are many requirements on 5G, such as high capacity, low latency, flexibility, and support for any-to-any communication. Cloud technology, in the form of a distributed cloud (also known as a network embedded cloud), is an enabler technology for 5G by allowing flexible networks that meet different user application requirements. On the other hand, Machine Type Communication (MTC) is a primary application for 5G, but it can add a high volume of control signaling. To manage the expected high volume of control signaling introduced by MTC, we identified the main control events that generate signaling messages in the network. Then, we proposed a decentralized core network architecture optimized for the identified control events. The proposed control plane functions are independent in the sense that each can be executed separately. The control functions can utilize the distributed cloud to manage the enormous amount of control signaling by handling this signaling locally. Additionally, we present an analysis of the control signaling performance for each proposed control function. We conclude that it is beneficial to move session management to data centers collocated with the BS on 5G network when there is high user density.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128770481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diego Perez-Palacin, R. Mirandola, Federico Monterisi, A. Montoli
We tackle the cloud provider's challenge of virtual machine placement when the client-experienced Quality of Service (QoS) is of paramount importance and the resource demand of virtual machines varies over time. To this end, this work investigates approaches that leverage measured dynamic data for placement decisions. Relying on dynamic data to guide decisions has, on the one hand, the potential to optimize hardware utilization while, on the other hand, increasing the risk to the provided QoS. In this context, we present three probabilistic methods for evaluating the suitability of a host to allocate new virtual machines. We also present experimental results that illustrate the differences in the outcomes of the presented approaches.
{"title":"QoS-driven Probabilistic Runtime Evaluations of Virtual Machine Placement on Hosts","authors":"Diego Perez-Palacin, R. Mirandola, Federico Monterisi, A. Montoli","doi":"10.1109/UCC.2015.24","DOIUrl":"https://doi.org/10.1109/UCC.2015.24","url":null,"abstract":"We tackle the cloud providers challenge of virtual machine placement when the client experienced Quality of Service (QoS) is of paramount importance and resource demand of virtual machines varies over time. To this end, this work investigates approaches that leverage measured dynamic data for placement decisions. Relying on dynamic data to guide decisions has, on the one hand, the potential to optimize hardware utilization, while, on the other hand, increases the risk on the provided QoS. In this context, we present three probabilistic methods for evaluation of host suitability to allocate new virtual machines. We also present experiments results that illustrate the differences in the outcomes of presented approaches.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117029813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
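The abstract does not spell out the three probabilistic suitability methods, but the general idea — accept a new VM on a host only if the probability of the time-varying aggregate demand exceeding capacity stays below a risk threshold — can be illustrated with one simple model. The Gaussian demand assumption, the threshold, and all names here are our illustration, not the paper's methods.

```python
import math

def overload_probability(vm_demands, capacity):
    # Each VM's time-varying demand is modeled (an assumption made here)
    # as an independent Gaussian given by a (mean, std) pair. The sum of
    # independent Gaussians is Gaussian, so the probability that total
    # demand exceeds the host capacity has a closed form.
    mu = sum(m for m, _ in vm_demands)
    sigma = math.sqrt(sum(s * s for _, s in vm_demands))
    if sigma == 0:
        return 0.0 if mu <= capacity else 1.0
    z = (capacity - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(total demand > capacity)

def host_suitable(vm_demands, new_vm, capacity, risk=0.05):
    # A host can accept the new VM if the overload probability after
    # placement stays below the accepted risk level.
    return overload_probability(vm_demands + [new_vm], capacity) < risk
```

The `risk` parameter makes the trade-off from the abstract explicit: a higher threshold packs hosts more densely at a greater risk to the provided QoS.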
FELIX, an EU-Japan jointly funded project, establishes a software-defined networking (SDN) experimental facility which spans two continents and several administrative domains via dynamic transit network connections. The FELIX architectural blueprint provides an excellent example where key capabilities such as policy-based software-defined infrastructure instantiation are supported by resource orchestrators that manage multi-domain distributed compute and network resources, including on-demand provisioning of transit network resources. In this context, FELIX implements a modern approach to authentication and authorization in SDN experimental facilities which enables fine-grained control and avoids single points of failure. This paper details the underlying mechanisms for user and transit network resource authentication and authorization in FELIX.
{"title":"Authentication and Authorization in FELIX","authors":"U. Toseef, K. Pentikousis","doi":"10.1109/UCC.2015.98","DOIUrl":"https://doi.org/10.1109/UCC.2015.98","url":null,"abstract":"FELIX, the EU-Japan jointly-funded project, establishes a software defined networking (SDN) experimental facility which spans two continents and several administrative domains via dynamic transit network connections. The FELIX architectural blueprint provides an excellent example where key topics such as policy-based software-defined infrastructure instantiation is supported by resource orchestrators which manage multi-domain distributed compute and network resources including on-demand provisioning of transit network resources. In this context, FELIX implements a modern approach for authentication and authorization in SDN experimental facilities which enables fine-grained control and avoids single points of failure. This paper details the underlying mechanisms for user and transit network resource authentication and authorization in FELIX.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116218349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corentin Dupont, Mehdi Sheikhalishahi, F. Facca, Fabien Hermenier
Sustainable energy sources such as renewables are replacing dirty energy sources in order to address the environmental challenges of the century. To operate data centres on renewable energy, its volatile and variable nature must be mitigated. In this paper, we present the Energy Adaptive Software Controller (EASC), a generic software controller and interface that developers can use to make their applications adapt to renewable energy availability. Adaptivity is realized through the concept of working modes, which allow an application to run at various performance levels. We advocate a collaborative approach involving the developers of the applications in order to use renewable energy more efficiently. The EASC abstracts away the details of application scheduling, execution, and monitoring. We demonstrate the applicability and genericity of the EASC concept through four different instantiations, covering two types of applications (task-oriented and service-oriented) and two kinds of computing environments (Infrastructure-as-a-Service and Platform-as-a-Service). The EASC has been trialled in the data centre of the healthcare agency of Trento, Italy and in the laboratory of HP Milan, Italy, with a mix of energy sources: the national grid and local solar panels. The experimental results show that the EASC increased renewable energy usage by 14% and 4.73% in the Trento and HP Labs trials, respectively.
{"title":"An Energy Aware Application Controller for Optimizing Renewable Energy Consumption in Data Centres","authors":"Corentin Dupont, Mehdi Sheikhalishahi, F. Facca, Fabien Hermenier","doi":"10.1109/UCC.2015.36","DOIUrl":"https://doi.org/10.1109/UCC.2015.36","url":null,"abstract":"Sustainable energy sources such as renewable energies are replacing dirty sources of energy in order to address the environmental challenges of the century. In order to operate data centres with renewable energies we have to mitigate their volatile and variable nature. In this paper, we present the Energy Adaptive Software Controller (EASC), a generic software controller and interface that developers can use to make their application adaptive to renewable energy availability. Adaptivity is realized through the concept of working modes which allow to run an application under various performance levels. We advocate for a collaborative approach involving the developers of the applications in order to use the renewable energies more efficiently. The notion of EASC allows to abstract away the details of application scheduling, execution, and monitoring. We demonstrate the applicability and genericity of the EASC concept through four different instantiations. These instantiations cover two types of applications: task-oriented and service-oriented, and two kind of computing environments: Infrastructure-as-a-Service, and Platform-as-a-Service. The EASC has been trialled in the data centre of the healthcare agency of Trento, Italy and in the laboratory of HP Milan, Italy, with a mix of energy sources: national grid and local solar panels. The experimental results show how the EASC allowed to increase the renewable energies usage of 14% and 4.73% for Trento and HP Labs trials, respectively.","PeriodicalId":381279,"journal":{"name":"2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126244664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
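The "working modes" concept in the EASC abstract — running an application at one of several performance levels depending on renewable availability — can be sketched as a simple mode selector. The mode table, power figures, and function name below are hypothetical illustrations, not the EASC's actual interface.

```python
# Hypothetical working-mode table: (name, power draw in watts, performance level).
WORKING_MODES = [
    ("full", 300, 1.0),
    ("reduced", 200, 0.7),
    ("minimal", 120, 0.4),
]

def select_working_mode(renewable_watts, modes=WORKING_MODES):
    # Pick the highest-performance mode whose power draw is covered by the
    # currently available renewable power; if none is covered, fall back to
    # the cheapest mode (the remainder would then come from the grid).
    covered = [m for m in modes if m[1] <= renewable_watts]
    if covered:
        return max(covered, key=lambda m: m[2])
    return min(modes, key=lambda m: m[1])
```

Re-evaluating this choice as the solar forecast changes is what lets a controller of this kind shift consumption toward periods of renewable surplus.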