{"title":"Robust Detection and Pattern Extraction of Repeated Signal Components Using Subband Shift-ACF","authors":"F. Kurth","doi":"10.1109/IC2E.2014.26","DOIUrl":"https://doi.org/10.1109/IC2E.2014.26","url":null,"abstract":"We propose a method for robustly detecting and extracting repeated signal components within a source signal. The method is based on the recently introduced shift autocorrelation (shift-ACF), which outperforms the classical ACF in signal detection when a signal component is repeated more than once. In this paper, we extend the shift-ACF to analyze the spectral structure of repeating signal components by using a subband decomposition. Subsequently, an algorithm for repeated event detection and extraction is proposed. An evaluation shows that the proposed subband shift-ACF outperforms detection based on the classical cepstrum. We discuss several possible applications in the domain of sensor signal analysis, particularly in audio monitoring.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132571385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
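The shift-ACF itself is defined in the cited prior work and is not reproduced in the abstract. As a rough stand-in, the sketch below shows how even the classical ACF reveals a repeated component as a peak at the repetition lag; the burst length, period, and noise level are arbitrary illustrative choices, and the subband variant would apply the same analysis per channel of a filter bank.

```python
import numpy as np

def acf(x):
    """Biased autocorrelation of x for non-negative lags, normalized at lag 0."""
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1:]
    return r / r[0]

rng = np.random.default_rng(0)
burst = rng.standard_normal(50)            # one signal event
period = 200
sig = np.zeros(1000)
for k in range(3):                         # the event repeated three times
    sig[k * period : k * period + 50] += burst
sig += 0.1 * rng.standard_normal(1000)     # background noise

r = acf(sig)
lag = int(np.argmax(r[50:400]) + 50)       # search past the zero-lag main lobe
print(lag)                                 # expected near 200, the repetition period
```

A repetition at lag L contributes correlation energy at L (and its multiples), so the peak location recovers the event period even in noise.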
{"title":"Template-based memory deduplication method for inter-data center live migration of virtual machines","authors":"Mingyu Li, Mian Zheng, Xiaohui Hu","doi":"10.1109/IC2E.2014.61","DOIUrl":"https://doi.org/10.1109/IC2E.2014.61","url":null,"abstract":"Live migration of virtual machines (VMs) can benefit data centers through load balancing, fault tolerance, energy saving, etc. Although live migration between geographically distributed data centers can enable optimized scheduling of resources over a large area, it remains expensive and difficult to implement. One of the main challenges is transferring the memory state over a WAN: there is a conflict between the low data transmission speed over the WAN and the rapid change of memory contents. This paper proposes a novel live migration method with page-count-based data deduplication, which takes advantage of the fact that VMs running the same or similar operating systems and other software tend to have identical memory pages. Template pages are selected based on the number of occurrences of each page across multiple VMs and are indexed by content hash. When a memory page is transferred, the source host first compares it with the templates. If a match is identified, the source host transfers the index instead of the data of the memory page. The experimental results show that our approach reduces the migration time by 27% and the data transferred by 38% on average compared to the default method of QEMU-KVM.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125363361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
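The page-count-based template selection and index-instead-of-data transfer described in this abstract can be sketched roughly as follows. The `min_count` threshold, SHA-256 hashing, and 4 KiB page size are illustrative assumptions, not details taken from the paper.

```python
import hashlib
from collections import Counter

def build_templates(vm_memories, min_count=2):
    """Pick pages seen at least min_count times across VMs as templates,
    indexed by their content hash."""
    counts = Counter(hashlib.sha256(p).hexdigest()
                     for pages in vm_memories for p in pages)
    index = {}
    for pages in vm_memories:
        for p in pages:
            h = hashlib.sha256(p).hexdigest()
            if counts[h] >= min_count and h not in index:
                index[h] = p
    return index

def encode_page(page, templates):
    """Transfer a short hash reference when the page matches a template,
    otherwise transfer the raw page data."""
    h = hashlib.sha256(page).hexdigest()
    return ("ref", h) if h in templates else ("page", page)

# Two VMs sharing a zero page and a common code page (4 KiB pages assumed).
zeros = bytes(4096)
kern = b"\x90" * 4096
vm_a = [zeros, kern, b"a" * 4096]
vm_b = [zeros, kern, b"b" * 4096]
templates = build_templates([vm_a, vm_b])
kinds = [encode_page(p, templates)[0] for p in vm_a]
print(kinds)  # ['ref', 'ref', 'page']
```

Pages unique to one VM still travel in full; only pages common across VMs shrink to a hash-sized reference, which is where the reported data reduction would come from.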
{"title":"Splicing MPLS and OpenFlow Tunnels Based on SDN Paradigm","authors":"Xiaogang Tu, Xin Li, Jiangang Zhou, Shanzhi Chen","doi":"10.1109/IC2E.2014.20","DOIUrl":"https://doi.org/10.1109/IC2E.2014.20","url":null,"abstract":"Software-defined networking (SDN) has emerged as a promising approach for supporting dynamic network functions and intelligent applications by decoupling the control plane from the forwarding plane. OpenFlow is the first standardized open management interface of the SDN architecture. However, it is unrealistic to simply swap out conventional networks for new infrastructure, and how to integrate OpenFlow with existing networks remains a serious challenge. We propose a tunnel splicing mechanism for heterogeneous networks with MPLS and OpenFlow routers. Two key mechanisms are proposed: first, abstracting the underlying network devices into uniform nodes to shield the details of the various equipment; second, separating the manipulation of the flow table and label switch table from the controller and performing it in an independent module. This new paradigm has been implemented on Linux, and tests have been carried out in experimental networks. The emulation results demonstrate its feasibility and efficiency.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"CE-27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114123034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cloud Computing: A Risk Assessment Model","authors":"Alireza Shameli-Sendi, M. Cheriet","doi":"10.1109/IC2E.2014.17","DOIUrl":"https://doi.org/10.1109/IC2E.2014.17","url":null,"abstract":"Cloud computing has recently emerged as a compelling paradigm by introducing characteristics such as on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Although cloud computing offers huge cost benefits for companies, it introduces unique security challenges that make risk assessment difficult. Cloud consumers need protection for their cloud applications against cyber attacks. Although security controls and policies are devised for each element of cloud computing, a framework with an overall quantitative risk assessment model is needed. The aim of this paper is to propose a framework for assessing the security risks associated with cloud computing platforms. The fully quantitative, iterative, and incremental approach enables cloud customers and providers to assess and manage cloud security risks. A proper risk assessment result leads to an appropriate risk management mechanism for mitigating risks and reaching an acceptable security level.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116830487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
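The abstract does not spell out the model's formulas. A minimal sketch of a fully quantitative assessment in the classic risk = likelihood × impact form, with an acceptance level above which risks must be treated, might look like this; all thresholds and threat data below are invented for illustration.

```python
def risk_score(likelihood, impact):
    # Classic quantitative form: risk = likelihood x impact,
    # with likelihood in [0, 1] and impact on a 1-10 scale.
    assert 0.0 <= likelihood <= 1.0 and 1 <= impact <= 10
    return likelihood * impact

def assess(threats, acceptance_level=2.0):
    """Rank threats by score and flag those above the acceptance level
    as needing a mitigation mechanism."""
    scored = sorted(((risk_score(l, i), name) for name, l, i in threats),
                    reverse=True)
    return [(name, s, s > acceptance_level) for s, name in scored]

# Hypothetical threat register: (name, likelihood, impact).
threats = [("data breach", 0.3, 9),
           ("dos attack", 0.5, 4),
           ("insider misuse", 0.1, 7)]
print(assess(threats))
```

An iterative, incremental process would re-run this assessment as controls are applied, lowering likelihood or impact until every residual risk falls under the acceptance level.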
{"title":"Exploring Models and Mechanisms for Exchanging Resources in a Federated Cloud","authors":"I. Petri, T. Beach, Mengsong Zou, J. Montes, O. Rana, M. Parashar","doi":"10.1109/IC2E.2014.9","DOIUrl":"https://doi.org/10.1109/IC2E.2014.9","url":null,"abstract":"One of the key benefits of Cloud systems is their ability to provide elastic, on-demand (seemingly infinite) computing capability and performance for supporting service delivery. With the resource availability in single data centres proving to be limited, the option of obtaining extra resources from a collection of Cloud providers has appeared as an efficacious solution. The ability to utilize resources from multiple Cloud providers is also often mentioned as a means to: (i) prevent vendor lock-in, (ii) enable in-house capacity to be combined with an external Cloud provider, and (iii) combine specialist capability from multiple Cloud vendors (especially when one vendor does not offer such capability or where such capability may come at a higher price). Such federation of Cloud systems can therefore overcome a limit in capacity and enable providers to dynamically increase the availability of resources to serve requests. We describe and evaluate the establishment of such a federation using a CometCloud-based implementation, consider a number of federation policies with associated scenarios, and determine the impact of these policies on the overall status of our system. CometCloud provides an overlay that enables multiple types of Cloud systems (both public and private) to be federated through the use of specialist gateways. We describe how two physical sites, in the UK and the US, can be federated in a seamless way using this system.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115665757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
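The specific federation policies evaluated in this paper are not given in the abstract. As a hedged illustration of the kind of decision such policies make, a generic threshold-based burst policy (all names and the 0.8 threshold are invented) could look like:

```python
def place_job(local_used, local_capacity, remote_free, burst_threshold=0.8):
    """Simple federation policy: run jobs locally until utilization passes
    the burst threshold, then offload to a federated site with free
    capacity, queueing only when no site can take the job."""
    utilization = local_used / local_capacity
    if utilization < burst_threshold:
        return "local"
    return "remote" if remote_free > 0 else "queue"

print(place_job(6, 10, remote_free=5))   # local: utilization 0.6 < 0.8
print(place_job(9, 10, remote_free=5))   # remote: bursting past the threshold
print(place_job(9, 10, remote_free=0))   # queue: federation is also saturated
```

Real policies would additionally weigh data transfer cost and pricing between sites, which is precisely the trade-off space such an evaluation explores.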
{"title":"Silver Lining: Enforcing Secure Information Flow at the Cloud Edge","authors":"S. Khan, Kevin W. Hamlen, Murat Kantarcioglu","doi":"10.1109/IC2E.2014.83","DOIUrl":"https://doi.org/10.1109/IC2E.2014.83","url":null,"abstract":"SilverLine is a novel, exceptionally modular framework for enforcing mandatory information flow policies for Java computations on commodity, data-processing, Platform-as-a-Service clouds by leveraging Aspect-Oriented Programming (AOP) and In-lined Reference Monitors (IRMs). Unlike traditional system-level approaches, which typically require modifications to the cloud kernel software, OS/hypervisor, VM, or cloud file system, SilverLine automatically in-lines secure information flow tracking code into untrusted Java binaries as they arrive at the cloud. This facilitates efficient enforcement of a large, flexible class of information flow and mandatory access control policies without any customization of the cloud or its underlying infrastructure. The cloud and the enforcement framework can therefore be maintained completely separately and orthogonally (i.e., modularly). To demonstrate the approach's feasibility, a prototype implements and deploys SilverLine on a real-world data processing cloud, Hadoop MapReduce. Evaluation results demonstrate that SilverLine provides inter-process information flow security for Hadoop clouds with easy maintainability (through modularity) and low overhead.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124808227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
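SilverLine rewrites Java bytecode, but the flavor of label propagation that an in-lined reference monitor enforces can be sketched in a few lines of Python. The `Labeled` wrapper and `declassify_ok` check below are illustrative inventions, not SilverLine's API.

```python
class Labeled:
    """Toy information flow tracking: values carry a set of security
    labels that propagates through computation, the kind of invariant
    an in-lined reference monitor weaves into untrusted code."""
    def __init__(self, value, labels=frozenset()):
        self.value = value
        self.labels = frozenset(labels)

    def __add__(self, other):
        # The result of combining two values carries both label sets.
        return Labeled(self.value + other.value, self.labels | other.labels)

def sink_allows(x, allowed):
    # A sink (e.g. an output channel) may only receive data whose
    # labels are all permitted by the policy.
    return x.labels <= allowed

salary = Labeled(100000, {"secret"})
bonus = Labeled(5000, {"secret"})
total = salary + bonus
print(total.value, sink_allows(total, {"public"}))  # 105000 False
```

The monitor's job is that every operation on labeled data, across an entire Java binary, routes through such checks automatically rather than by programmer discipline.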
{"title":"VMDedup: Memory De-duplication in Hypervisor","authors":"Furquan Shaikh, Fangzhou Yao, Indranil Gupta, R. Campbell","doi":"10.1109/IC2E.2014.69","DOIUrl":"https://doi.org/10.1109/IC2E.2014.69","url":null,"abstract":"Virtualization techniques are widely used in cloud computing environments today. Such environments host a large number of similar virtual instances sharing the same physical infrastructure. In this paper, we focus on optimizing memory usage across virtual machines by automatically de-duplicating memory on a per-page basis. Our approach maintains a single copy of duplicated pages in physical memory using a copy-on-write mechanism. Unlike some existing strategies, which are intended only for applications and need user configuration, VMDedup provides automatic memory de-duplication support within the hypervisor, achieving benefits across operating system code and data as well as application binaries. We have implemented a prototype of this system within the Xen hypervisor to support both para-virtualized and fully-virtualized operating system instances.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"581 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122500732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
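A toy model of hypervisor-level page sharing with copy-on-write, in the spirit of (but much simpler than) VMDedup; the hash-indexed frame table and refcounting scheme here are illustrative, not the paper's data structures.

```python
import hashlib

class DedupStore:
    """Toy page store: identical pages share one physical copy
    (refcounted); a write breaks the sharing, mimicking copy-on-write."""
    def __init__(self):
        self.frames = {}  # content hash -> (page bytes, refcount)

    def map_page(self, page):
        h = hashlib.sha256(page).hexdigest()
        data, refs = self.frames.get(h, (page, 0))
        self.frames[h] = (data, refs + 1)
        return h

    def write_page(self, h, new_page):
        data, refs = self.frames[h]
        if refs > 1:                       # shared: unshare before writing
            self.frames[h] = (data, refs - 1)
        else:
            del self.frames[h]
        return self.map_page(new_page)     # writer gets its own copy

store = DedupStore()
zero = bytes(4096)
h1 = store.map_page(zero)            # VM 1 maps a zero page
h2 = store.map_page(zero)            # VM 2 maps the same content
print(h1 == h2, len(store.frames))   # True 1 -> one physical copy
h3 = store.write_page(h2, b"x" * 4096)
print(len(store.frames))             # 2 -> sharing broken on write
```

A real hypervisor does this under hardware page protection: shared frames are mapped read-only, and the write fault triggers the unshare.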
{"title":"Approaches for Virtual Organization Support in OpenStack","authors":"Craig A. Lee, N. Desai","doi":"10.1109/IC2E.2014.35","DOIUrl":"https://doi.org/10.1109/IC2E.2014.35","url":null,"abstract":"This paper describes approaches for supporting virtual organizations (VOs) in OpenStack. A VO provides a security and discovery context whereby collaboration across multiple sites can be enabled while enforcing joint security policies. VOs were developed in the grid computing arena to manage international scientific collaborations. However, the VO abstraction is not grid-specific and is applicable in any distributed computing environment, including inter-clouds. To address the need to securely manage on-demand data sharing in disaster response efforts, we prototyped VO support in OpenStack. To evaluate our practical implementation approach, and other related work, we systematically define the VO design space. This also allows us to clearly identify outstanding issues and recommendations for future work to realize effective cloud federations.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"2012 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122921104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Taiwan UniCloud: A Cloud Testbed with Collaborative Cloud Services","authors":"Wu-Chun Chung, Po-Chi Shih, Kuan-Chou Lai, Kuan-Ching Li, Che-Rung Lee, J. Chou, Ching-Hsien Hsu, Yeh-Ching Chung","doi":"10.1109/IC2E.2014.28","DOIUrl":"https://doi.org/10.1109/IC2E.2014.28","url":null,"abstract":"This paper introduces a prototype of Taiwan UniCloud, a community-driven hybrid cloud platform for academics in Taiwan. The goal is to leverage resources across multiple clouds operated by different organizations. Each self-managing cloud can join the UniCloud platform to share its resources and simultaneously benefit from the scale-out capabilities of other clouds. Resources are thus elastic and shareable, so that unexpected resource demands on any one cloud can be absorbed. The proposed platform provides a web portal for operating each cloud via a uniform user interface. Virtual clusters with multi-core VMs can be constructed for parallel and distributed processing models. An object-based storage system is also provided to federate different storage providers. This paper not only presents the architectural design of Taiwan UniCloud, but also evaluates its performance to demonstrate the viability of the current implementation. Experimental results show the feasibility of the proposed platform as well as the benefit of cloud federation.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115961041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nihil: Computing Clouds with Zero Emissions","authors":"Joseph Doyle, D. O'Mahony","doi":"10.1109/IC2E.2014.65","DOIUrl":"https://doi.org/10.1109/IC2E.2014.65","url":null,"abstract":"Over the past five years there has been an increasing desire to reduce the environmental impact of cloud computing. Recent proposals have suggested taking a \"net-zero\" energy approach, where the data centre supplies as much energy to the grid as it draws, in order to reduce its environmental impact. In this paper we propose the Nihil system, which uses energy storage, together with load balancing based upon energy storage levels, to deliver reliable cloud service with no carbon emissions and no reliance on the grid.","PeriodicalId":273902,"journal":{"name":"2014 IEEE International Conference on Cloud Engineering","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126114751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
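The abstract leaves the balancing algorithm unspecified. One plausible reading, routing load to the site with the most stored energy and shedding load below a reserve level, can be sketched as follows (the site names and 0.2 reserve threshold are invented for illustration):

```python
def pick_site(sites):
    """Route the next request to the site with the highest stored energy
    fraction, skipping sites whose battery is below a safety reserve."""
    RESERVE = 0.2  # illustrative minimum battery fraction
    viable = [(level, name) for name, level in sites.items() if level >= RESERVE]
    if not viable:
        return None  # shed load rather than draw from the grid
    return max(viable)[1]

# Hypothetical sites with current battery levels as fractions of capacity.
sites = {"dublin": 0.9, "galway": 0.4, "cork": 0.1}
print(pick_site(sites))  # dublin
```

Steering work toward well-charged sites keeps each battery inside its operating band, which is how storage-aware balancing can avoid ever falling back on grid power.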