Deployment Strategies for Distributed Applications on Cloud Computing Infrastructures
J. S. V. D. Veen, E. Lazovik, M. Makkes, R. Meijer
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.136
Cloud computing enables on-demand access to a shared pool of IT resources. In the case of Infrastructure as a Service (IaaS), the cloud user typically acquires Virtual Machines (VMs) from the provider and decides when and for how long to use them. Because of the pay-per-use nature of most clouds, there is a strong incentive to use as few resources as possible and to release them quickly when they are no longer needed. Every step of the deployment process, i.e., acquiring VMs, creating network links, and installing, configuring and starting software components on them, should therefore be as fast as possible. Users can shorten deployment time by performing some steps in parallel or by exploiting timing knowledge from previous deployments. This paper presents four strategies for deploying applications on cloud computing infrastructures. Performance measurements of application deployments on three public IaaS clouds show the speed differences between these strategies.
A Virtualization-Based Cloud Infrastructure for IMS Core Network
Feng Lu, Hao Pan, Xiao Lei, Xiaofei Liao, Hai Jin
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.10
IP Multimedia Subsystem (IMS) has been accepted as the core control platform by 3GPP and is recognized as the vision beyond GSM for the Next Generation Network (NGN). The IMS framework delivers IP multimedia to mobile users through the Session Initiation Protocol (SIP) and supports heterogeneous network access. In this paper, we propose a virtualization-based cloud platform for the IMS core network with a novel load-balancing and disaster-recovery policy. Experimental results indicate that the proposed mechanism improves system performance by dynamically allocating resources according to the current load. The proposed cloud infrastructure can recover from a disaster in seconds by using live migration of virtual machines.
Safer@Home Analytics: A Big Data Analytical Solution for Smart Homes
Antorweep Chakravorty, T. Wlodarczyk, Chunming Rong
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.129
The vast amounts of data generated by sensors in smart homes can give valuable insights into the social and behavioral patterns of households and their residents. The goal of the project is to investigate and implement mechanisms that capture and store continuous streams of time-series data from optical movement sensors, analyze and mine them for anomalies and changes to enable preventive care, and present and visualize meaningful information to target user groups (next of kin, care providers, professional services), all while ensuring that the privacy of participants is preserved.
A Metamodel for Measuring Accountability Attributes in the Cloud
David Nuñez, Carmen Fernández-Gago, Siani Pearson, M. Felici
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.53
Cloud governance, and in particular data governance in the cloud, relies on different technical and organizational practices and procedures, such as policy enforcement, risk management, incident management and remediation. The concept of accountability encompasses such practices and is essential for enhancing security and trustworthiness in the cloud. Besides this, proper measurement of cloud services, at both a technical and a governance level, is a distinctive aspect of the cloud computing model. Hence, a natural problem that arises is how to measure the impact on accountability of the procedures followed in practice by organizations that participate in the cloud ecosystem. In this paper, we describe a metamodel for measuring accountability properties in cloud computing, as discussed and defined by the Cloud Accountability Project (A4Cloud). The goal of this metamodel is to act as a language for describing (i) accountability properties in terms of actions between entities, and (ii) metrics for measuring the fulfillment of such properties. It also allows the recursive decomposition of properties and metrics, from a high-level, abstract world to a tangible, measurable one. Finally, we illustrate the metamodel by modelling the transparency property and defining some metrics for it.
Using the EXECO Toolkit to Perform Automatic and Reproducible Cloud Experiments
Matthieu Imbert, L. Pouilloux, Jonathan Rouzaud-Cornabas, A. Lèbre, Takahiro Hirofuchi
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.119
This paper describes EXECO, a library that provides easy and efficient control of the execution of local or remote, standalone or parallel processes, as well as tools for scripting distributed computing experiments on any computing platform. After discussing the EXECO internals, we illustrate its use by presenting two experiments dealing with virtualization technologies on the Grid'5000 testbed.
Materialized View as a Service for Large-Scale House Log in Smart City
Shintaro Yamamoto, S. Matsumoto, S. Saiki, Masahide Nakamura
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.154
A smart city provides various value-added services by collecting large-scale data from houses and infrastructure within a city. Using such large-scale raw data usually requires considerable computational effort and processing time from individual applications. To reduce this effort and time, we propose Materialized View as a Service (MVaaS). With MVaaS, each application can easily and dynamically construct its own materialized view, in which the raw data is cached in a format appropriate for that application. Once the view is constructed, the application can quickly access the necessary data. In this paper, we design an MVaaS framework specifically for large-scale house logs managed in our smart-city data platform Scallop4SC. In the framework, each application first specifies how the raw data should be filtered, grouped and aggregated. For a given data specification, MVaaS dynamically constructs a MapReduce batch program that converts the raw data into the desired view. The batch is then executed on Hadoop, and the resulting view is stored in HBase. We conduct an experimental evaluation comparing response times with and without the proposed MVaaS.
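The filter/group/aggregate specification can be illustrated with a minimal in-memory sketch (the house-log schema and field names are hypothetical; the real system compiles such a spec into a Hadoop MapReduce job and stores the resulting view in HBase):

```python
from collections import defaultdict

# Hypothetical raw house log: (house_id, device, energy) readings,
# with energy recorded in tenths of a kWh.
raw_log = [
    ("h1", "heater", 12), ("h1", "tv", 3),
    ("h2", "heater", 20), ("h1", "heater", 8),
]

def build_view(records, filter_fn, key_fn, agg_fn):
    """Filter, group, and aggregate raw records into a materialized view."""
    groups = defaultdict(list)
    for rec in records:
        if filter_fn(rec):
            groups[key_fn(rec)].append(rec)
    return {key: agg_fn(recs) for key, recs in groups.items()}

# Example spec: total heater energy per house.
view = build_view(
    raw_log,
    filter_fn=lambda r: r[1] == "heater",
    key_fn=lambda r: r[0],
    agg_fn=lambda recs: sum(r[2] for r in recs),
)
print(view)  # {'h1': 20, 'h2': 20}
```

The point of materializing the view is that this computation runs once as a batch; subsequent application queries read the cached result instead of rescanning the raw log.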
How Advanced Cloud Services Can Improve Gaming Performance
Artur Carvalho Zucchi, N. Gonzalez, Marcelo Risse de Andrade, Rosangela de Fatima Pereira, Walter Akio Goya, K. Langona, T. Carvalho, Jan-Erik Mångs
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.150
This paper presents the idea of applying distributed processing to gaming. Trade Wind (TW), a cloud deployment and management solution, offers a distributed processing feature that can be applied to real-time games to improve performance and user experience. The main objective of this paper is to demonstrate how TW can optimize online gaming. A hypothesis based on the current features of TW is formulated and explained in this paper.
Security as a Service Using an SLA-Based Approach via SPECS
M. Rak, N. Suri, Jesus Luna, D. Petcu, V. Casola, Umberto Villano
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.165
The cloud offers attractive options for migrating corporate applications, without requiring the corporate security manager to manage or secure physical resources. While this ease of migration is appealing, several security issues arise: can the validity of corporate legal compliance regulations still be ensured for remote data storage? How can the ability of a Cloud Service Provider (CSP) to meet corporate security requirements be assessed? Can the agreed cloud security levels be monitored and enforced? Unfortunately, no comprehensive solutions exist for these issues. In this context, we introduce a new approach, named SPECS. It aims to offer mechanisms to specify cloud security requirements, to assess the security features offered by CSPs, and to integrate the desired security services (e.g., credential and access management) into cloud services following a Security-as-a-Service approach. Furthermore, SPECS provides systematic approaches to negotiate, monitor and enforce the security parameters specified in Service Level Agreements (SLAs), and to develop and deploy cloud SLA-aware security services, implemented as an open-source Platform-as-a-Service (PaaS). This paper introduces the main concepts of SPECS.
Dynamic Data Partitioning and Virtual Machine Mapping: Efficient Data Intensive Computation
Kenn Slagter, Ching-Hsien Hsu, Yeh-Ching Chung
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.134
Big data refers to data so large that it exceeds the processing capabilities of traditional systems. Big data can be awkward to work with, and its storage, processing and analysis can be problematic. MapReduce is a recent programming model that can handle big data by distributing its storage and processing amongst a large number of computers (nodes). However, this means the time required to process a MapReduce job depends on whichever node is last to complete a task, a problem exacerbated by heterogeneous environments. In this paper we propose a method to improve MapReduce execution in heterogeneous environments by dynamically partitioning data during the Map phase and using virtual machine mapping in the Reduce phase to maximize resource utilization.
Scaling a Plagiarism Search Service on the BonFIRE Testbed
A. Micsik, Peter Pallinger, Dávid Siklósi
Pub Date: 2013-12-02 | DOI: 10.1109/CloudCom.2013.104
The KOPI Online Plagiarism Search Portal, a nationwide plagiarism service in Hungary, is a unique open service that enables web users to check for identical or similar content between their own documents and the files uploaded by other authors. Recently, we extended the service to also detect cross-language plagiarism, albeit at a greatly increased computational cost. This paper describes our experiments on the BonFIRE testbed to find a suitable scaling mechanism for translational plagiarism detection in a cloud federation.