F. Farahnakian, T. Pahikkala, P. Liljeberg, J. Plosila
To manage resources in a large-scale data center, we present a hierarchical agent-based architecture in which multiple agents cooperate to minimize the number of active physical machines according to the current resource requirements. We propose a local agent in each physical machine (PM) to determine the PM's status and a global agent to optimize VM placement based on those statuses. Experimental results show that the proposed architecture can minimize energy consumption while maintaining an acceptable QoS.
{"title":"Hierarchical Agent-Based Architecture for Resource Management in Cloud Data Centers","authors":"F. Farahnakian, T. Pahikkala, P. Liljeberg, J. Plosila","doi":"10.1109/CLOUD.2014.128","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.128","url":null,"abstract":"To manage resources in a large-scale data center, we present a hierarchical agent-based architecture in which multiple agents cooperate to minimize the number of active physical machines according to the current resource requirements. We propose a local agent in each physical machine (PM) to determine the PM's status and a global agent to optimize VM placement based on those statuses. Experimental results show that the proposed architecture can minimize energy consumption while maintaining an acceptable QoS.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123921201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
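The consolidation objective above (minimizing active PMs for the current demand) is, at its core, a bin-packing problem. As a hedged illustration of the kind of decision a global agent might make, here is a minimal first-fit-decreasing sketch; the paper's actual agent protocol is not reproduced, and all names are my own:

```python
def consolidate(vm_demands, pm_capacity):
    """Greedy first-fit-decreasing placement: pack VM resource demands
    onto as few physical machines as possible. Illustrative only; the
    paper's cooperative agent-based protocol is not reproduced here."""
    pms = []  # each entry is the remaining free capacity of one active PM
    for demand in sorted(vm_demands, reverse=True):
        for i, free in enumerate(pms):
            if demand <= free:  # VM fits on an already-active PM
                pms[i] = free - demand
                break
        else:
            pms.append(pm_capacity - demand)  # power on a new PM
    return len(pms)  # number of active PMs

# Five VMs with these CPU shares fit on two unit-capacity PMs:
# consolidate([0.5, 0.5, 0.4, 0.3, 0.3], 1.0) == 2
```

First-fit decreasing is a classic heuristic for this objective; a real system would additionally respect per-resource dimensions (CPU, memory, network) and migration costs.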
In this work, we focus on the problem of virtual machine (VM) placement in geographically distributed data centers. We consider communicating VMs assigned to data centers that are connected over an IP-over-WDM network. We aim to plan and optimize the placement of VMs in data centers so as to minimize the IP traffic within the backbone network. We first propose a formulation that can be viewed as a variant of the Hub Location problem and show that it becomes extremely difficult to solve for medium and large instances. To overcome this difficulty, we reformulate the problem as a multi-commodity flow model, adopt variable-aggregation methods, and add valid inequalities to strengthen the new formulation. The experiments we present show the effectiveness of our final model in terms of running time and computational resources.
{"title":"Optimal Virtual Machine Placement in Large-Scale Cloud Systems","authors":"Hana Teyeb, Ali Balma, N. Hadj-Alouane, S. Tata","doi":"10.1109/CLOUD.2014.64","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.64","url":null,"abstract":"In this work, we focus on the problem of virtual machine (VM) placement in geographically distributed data centers. We consider communicating VMs assigned to data centers that are connected over an IP-over-WDM network. We aim to plan and optimize the placement of VMs in data centers so as to minimize the IP traffic within the backbone network. We first propose a formulation that can be viewed as a variant of the Hub Location problem and show that it becomes extremely difficult to solve for medium and large instances. To overcome this difficulty, we reformulate the problem as a multi-commodity flow model, adopt variable-aggregation methods, and add valid inequalities to strengthen the new formulation. The experiments we present show the effectiveness of our final model in terms of running time and computational resources.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122382912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abhrajit Ghosh, Angelo Sapello, A. Poylisher, C. Chiang, A. Kubota, T. Matsunaka
We present XSWAT (Xen SoftWare ATtestation), a system that uses timing-based software attestation to verify the integrity of cloud computing platforms. We believe ours is the first system to use this attestation technique in a cloud environment, and the results obtained indicate the feasibility of its deployment. We present an overview of the XSWAT system and the associated threat model, along with a study of how the cloud environment affects its performance. The environmental parameters include the type of interconnect between the XSWAT verifier and the measurement agent, as well as the number of concurrently executing virtual machines on the platform being verified. Conversely, we also study the impact of XSWAT execution on well-known system benchmarks and find it to be insignificant, thereby strengthening the case for XSWAT. We also discuss novel XSWAT mechanisms for addressing TOCTOU attacks.
{"title":"On the Feasibility of Deploying Software Attestation in Cloud Environments","authors":"Abhrajit Ghosh, Angelo Sapello, A. Poylisher, C. Chiang, A. Kubota, T. Matsunaka","doi":"10.1109/CLOUD.2014.27","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.27","url":null,"abstract":"We present XSWAT (Xen SoftWare ATtestation), a system that uses timing-based software attestation to verify the integrity of cloud computing platforms. We believe ours is the first system to use this attestation technique in a cloud environment, and the results obtained indicate the feasibility of its deployment. We present an overview of the XSWAT system and the associated threat model, along with a study of how the cloud environment affects its performance. The environmental parameters include the type of interconnect between the XSWAT verifier and the measurement agent, as well as the number of concurrently executing virtual machines on the platform being verified. Conversely, we also study the impact of XSWAT execution on well-known system benchmarks and find it to be insignificant, thereby strengthening the case for XSWAT. We also discuss novel XSWAT mechanisms for addressing TOCTOU attacks.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128825042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yanzhang He, Xiaohong Jiang, Zhaohui Wu, Kejiang Ye, Zhongzhong Chen
With the rapid development of big data and cloud computing, big data analytics as a service in the cloud is becoming increasingly popular. More and more individuals and organizations tend to rent virtual clusters to store and analyze data rather than building their own data centers. However, in a virtualized environment, it is not clear whether scaling out (using a cluster with more nodes to process big data) is better than scaling up (adding more resources to the original virtual machines (VMs) in the cluster). In this paper, we study the scalability of Hadoop virtual clusters with cost taken into consideration. We first present the design and implementation of the VirtualMR platform, which provides users with scalable Hadoop virtual cluster services for MapReduce-based big data analytics. We then run a series of Hadoop benchmarks and real parallel machine learning algorithms to evaluate the scalability of both the scale-up and scale-out methods. Finally, we integrate our platform with a resource monitoring module and propose a system tuner. By analyzing the monitored data, we dynamically adjust the parameters of the Hadoop framework and the virtual machine configuration to improve resource utilization and reduce rental cost. Experimental results show that the scale-up method outperforms the scale-out method for CPU-bound applications, while the opposite holds for I/O-bound applications. The results also verify the effectiveness of the system tuner in increasing resource utilization and reducing rental cost.
{"title":"Scalability Analysis and Improvement of Hadoop Virtual Cluster with Cost Consideration","authors":"Yanzhang He, Xiaohong Jiang, Zhaohui Wu, Kejiang Ye, Zhongzhong Chen","doi":"10.1109/CLOUD.2014.85","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.85","url":null,"abstract":"With the rapid development of big data and cloud computing, big data analytics as a service in the cloud is becoming increasingly popular. More and more individuals and organizations tend to rent virtual clusters to store and analyze data rather than building their own data centers. However, in a virtualized environment, it is not clear whether scaling out (using a cluster with more nodes to process big data) is better than scaling up (adding more resources to the original virtual machines (VMs) in the cluster). In this paper, we study the scalability of Hadoop virtual clusters with cost taken into consideration. We first present the design and implementation of the VirtualMR platform, which provides users with scalable Hadoop virtual cluster services for MapReduce-based big data analytics. We then run a series of Hadoop benchmarks and real parallel machine learning algorithms to evaluate the scalability of both the scale-up and scale-out methods. Finally, we integrate our platform with a resource monitoring module and propose a system tuner. By analyzing the monitored data, we dynamically adjust the parameters of the Hadoop framework and the virtual machine configuration to improve resource utilization and reduce rental cost. Experimental results show that the scale-up method outperforms the scale-out method for CPU-bound applications, while the opposite holds for I/O-bound applications. The results also verify the effectiveness of the system tuner in increasing resource utilization and reducing rental cost.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129491160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtualization is one of the key enabling technologies for cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources such as CPU, memory, network interfaces, and disk controllers. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the cloud. Disk I/O requests in a typical cloud setup may have varied latency and throughput requirements, as they arise from a range of heterogeneous applications with diverse performance goals. This necessitates providing differentiated performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework designed to take application I/O performance metrics, such as acceptable latency, and convert them into an appropriate priority value for disk access based on the current system state. The framework aims to provide differentiated I/O service to various applications and to ensure predictable performance for critical applications in a multi-tenant cloud environment. We demonstrate that the framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees for cloud storage.
{"title":"PriDyn: Framework for Performance Specific QoS in Cloud Storage","authors":"Nitisha Jain, J. Lakshmi","doi":"10.1109/CLOUD.2014.15","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.15","url":null,"abstract":"Virtualization is one of the key enabling technologies for cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources such as CPU, memory, network interfaces, and disk controllers. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the cloud. Disk I/O requests in a typical cloud setup may have varied latency and throughput requirements, as they arise from a range of heterogeneous applications with diverse performance goals. This necessitates providing differentiated performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework designed to take application I/O performance metrics, such as acceptable latency, and convert them into an appropriate priority value for disk access based on the current system state. The framework aims to provide differentiated I/O service to various applications and to ensure predictable performance for critical applications in a multi-tenant cloud environment. We demonstrate that the framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees for cloud storage.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130461693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
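The PriDyn abstract describes converting an application's acceptable latency into a disk-access priority, but does not give the formula. As a hedged illustration of one such mapping, a least-slack-first ordering (all field and function names are my own assumptions, not PriDyn's API) could look like:

```python
def order_requests(requests, now):
    """Order pending disk I/O requests so that the request with the least
    slack (time remaining before its acceptable latency would be violated)
    is served first. Illustrative only; not PriDyn's actual policy."""
    return sorted(
        requests,
        key=lambda r: (r["arrival"] + r["acceptable_latency"]) - now,
    )

# A latency-critical database request overtakes a lenient backup stream:
pending = [
    {"name": "backup", "arrival": 0.0, "acceptable_latency": 5.0},
    {"name": "db",     "arrival": 1.0, "acceptable_latency": 0.5},
]
# order_requests(pending, now=1.2)[0]["name"] == "db"
```

The point of the sketch is only the shape of the mechanism: a deadline derived from acceptable latency drives the priority, and the ordering is recomputed as system state changes.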
B. Snyder, R. Green, V. Devabhaktuni, Mansoor Alam
The cloud computing paradigm has ushered in the need for new methods of evaluating the performance of a given cloud computing system (CCS) in order to ensure customer satisfaction and service-level-agreement compliance. This study proposes a method for evaluating the reliability of a CCS alongside the corresponding performance metrics. Specifically, and for the first time, non-sequential Monte Carlo simulation (MCS) is used to evaluate CCS reliability at a system-wide scale. Results demonstrate that the proposed method is promising and may apply to large-scale systems.
{"title":"Evaluation of Highly Reliable Cloud Computing Systems Using Non-sequential Monte Carlo Simulation","authors":"B. Snyder, R. Green, V. Devabhaktuni, Mansoor Alam","doi":"10.1109/CLOUD.2014.133","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.133","url":null,"abstract":"The cloud computing paradigm has ushered in the need for new methods of evaluating the performance of a given cloud computing system (CCS) in order to ensure customer satisfaction and service-level-agreement compliance. This study proposes a method for evaluating the reliability of a CCS alongside the corresponding performance metrics. Specifically, and for the first time, non-sequential Monte Carlo simulation (MCS) is used to evaluate CCS reliability at a system-wide scale. Results demonstrate that the proposed method is promising and may apply to large-scale systems.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"5 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130672639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
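Non-sequential MCS, the technique named above, samples system states independently rather than simulating chronological operation. A minimal sketch of the general idea, under the simplifying assumptions that nodes fail independently with a common probability and the system is "up" when at least k nodes are up (the paper's CCS model is certainly richer):

```python
import random

def mcs_reliability(n_nodes, p_fail, k_required, trials=100_000, seed=42):
    """Non-sequential Monte Carlo simulation: sample each node's up/down
    state independently per trial (no chronological evolution) and estimate
    the probability that at least k_required nodes are up. Illustrative of
    the general technique only, not of the paper's specific CCS model."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        nodes_up = sum(rng.random() >= p_fail for _ in range(n_nodes))
        if nodes_up >= k_required:
            successes += 1
    return successes / trials
```

For 10 nodes, a 10% failure probability, and 8 nodes required, the estimate converges to the binomial tail probability (about 0.93); the attraction of the non-sequential form is that each trial is independent and cheap, so it parallelizes trivially at large scale.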
Hind Benfenatki, Catarina Ferreira Da Silva, A. Benharkat, P. Ghodous, F. Biennier
The purpose of this paper is to define a generic methodology for the semi-automatic development of cloud-based business applications. It can be used by non-IT experts, such as business stakeholders, who trigger the development of a business application simply by stating its requirements in terms of business functionalities and constraints, QoS parameters, and their preferences. From these functionalities and constraints, Linked USDL requirements files are automatically generated. These files provide the basis for cloud service discovery and launch the automatic development of cloud business applications. We present the first developments of our prototype.
{"title":"Methodology for Semi-automatic Development of Cloud-Based Business Applications","authors":"Hind Benfenatki, Catarina Ferreira Da Silva, A. Benharkat, P. Ghodous, F. Biennier","doi":"10.1109/CLOUD.2014.139","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.139","url":null,"abstract":"The purpose of this paper is to define a generic methodology for the semi-automatic development of cloud-based business applications. It can be used by non-IT experts, such as business stakeholders, who trigger the development of a business application simply by stating its requirements in terms of business functionalities and constraints, QoS parameters, and their preferences. From these functionalities and constraints, Linked USDL requirements files are automatically generated. These files provide the basis for cloud service discovery and launch the automatic development of cloud business applications. We present the first developments of our prototype.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130590663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Carlo Marcelo Revoredo da Silva, Jose Silva, Ricardo Marinho Melo, Ricardo Batista Rodrigues, Lucien Rocha Lucien, Sandro Pereira De Melo, Adolfo Colares, V. Garcia
The purpose of this article is to present a Privacy Maturity Model for services offered by cloud computing providers in the context of cloud storage. This study presents an overview of the current barriers in these scenarios and a model based on a technical analysis of maturity in such environments. We present the goals to be achieved in this research, as well as the strategies to be pursued for handling sensitive data in order to establish an effective level of privacy. We also describe the planning of an architectural model as a prototype, and set out the stages of its research and implementation.
{"title":"A Privacy Maturity Model for Cloud Storage Services","authors":"Carlo Marcelo Revoredo da Silva, Jose Silva, Ricardo Marinho Melo, Ricardo Batista Rodrigues, Lucien Rocha Lucien, Sandro Pereira De Melo, Adolfo Colares, V. Garcia","doi":"10.1109/CLOUD.2014.135","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.135","url":null,"abstract":"The purpose of this article is to present a Privacy Maturity Model for services offered by cloud computing providers in the context of cloud storage. This study presents an overview of the current barriers in these scenarios and a model based on a technical analysis of maturity in such environments. We present the goals to be achieved in this research, as well as the strategies to be pursued for handling sensitive data in order to establish an effective level of privacy. We also describe the planning of an architectural model as a prototype, and set out the stages of its research and implementation.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125516961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Qi Zhang, Ling Liu, Kisung Lee, Yang Zhou, Aameek Singh, N. Mandagere, Sandeep Gopisetty, Gabriel Alatorre
With more data generated and collected in a geographically distributed manner, combined with the increased computational requirements of large-scale data-intensive analysis, we have witnessed growing demand for geographically distributed cloud datacenters and hybrid cloud service provisioning. These enable organizations to meet instantaneous demand for additional computational resources and to expand in-house resources to sustain peak service demands by utilizing cloud resources. A key challenge for running applications in such a geographically distributed computing environment is how to efficiently schedule and perform analysis over data that is distributed across multiple datacenters. In this paper, we first compare multi-datacenter with single-datacenter Hadoop deployments to identify the performance issues inherent in a geographically distributed cloud. We also provide a generalization of the problem in the context of geographically distributed cloud datacenters, with a discussion of general optimization strategies. We then describe the design and implementation of a suite of system-level optimizations for improving Hadoop service provisioning in a geo-distributed cloud, including prediction-based job localization, configurable HDFS data placement, and data prefetching. Our experimental evaluation shows that our prediction-based localization has a very low error ratio (below 5%) and that our optimizations can improve the execution time of the Reduce phase by 48.6%.
{"title":"Improving Hadoop Service Provisioning in a Geographically Distributed Cloud","authors":"Qi Zhang, Ling Liu, Kisung Lee, Yang Zhou, Aameek Singh, N. Mandagere, Sandeep Gopisetty, Gabriel Alatorre","doi":"10.1109/CLOUD.2014.65","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.65","url":null,"abstract":"With more data generated and collected in a geographically distributed manner, combined with the increased computational requirements of large-scale data-intensive analysis, we have witnessed growing demand for geographically distributed cloud datacenters and hybrid cloud service provisioning. These enable organizations to meet instantaneous demand for additional computational resources and to expand in-house resources to sustain peak service demands by utilizing cloud resources. A key challenge for running applications in such a geographically distributed computing environment is how to efficiently schedule and perform analysis over data that is distributed across multiple datacenters. In this paper, we first compare multi-datacenter with single-datacenter Hadoop deployments to identify the performance issues inherent in a geographically distributed cloud. We also provide a generalization of the problem in the context of geographically distributed cloud datacenters, with a discussion of general optimization strategies. We then describe the design and implementation of a suite of system-level optimizations for improving Hadoop service provisioning in a geo-distributed cloud, including prediction-based job localization, configurable HDFS data placement, and data prefetching. Our experimental evaluation shows that our prediction-based localization has a very low error ratio (below 5%) and that our optimizations can improve the execution time of the Reduce phase by 48.6%.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126943851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A business process (BP) is a series of logically related tasks, implemented by a set of applications or services, performed together to produce a defined set of results. Scheduling cloud resources for BP tasks is a difficult problem. First, it must consider the dependencies and communication between tasks within a BP. Second, it must take into account several objectives, such as minimizing execution time, minimizing execution cost, and maximizing resource utilization. Moreover, BP execution can be affected by contextual information, such as the unavailability of resources or an overloaded network, which makes the scheduling problem more complex. In this paper, we propose a context-based scheduling approach for adaptive BPs in the cloud.
{"title":"A Context Based Scheduling Approach for Adaptive Business Process in the Cloud","authors":"Molka Rekik, Khouloud Boukadi, H. Ben-Abdallah","doi":"10.1109/CLOUD.2014.137","DOIUrl":"https://doi.org/10.1109/CLOUD.2014.137","url":null,"abstract":"A business process (BP) is a series of logically related tasks, implemented by a set of applications or services, performed together to produce a defined set of results. Scheduling cloud resources for BP tasks is a difficult problem. First, it must consider the dependencies and communication between tasks within a BP. Second, it must take into account several objectives, such as minimizing execution time, minimizing execution cost, and maximizing resource utilization. Moreover, BP execution can be affected by contextual information, such as the unavailability of resources or an overloaded network, which makes the scheduling problem more complex. In this paper, we propose a context-based scheduling approach for adaptive BPs in the cloud.","PeriodicalId":288542,"journal":{"name":"2014 IEEE 7th International Conference on Cloud Computing","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127170879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
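Competing objectives like those listed in the abstract above (execution time, execution cost, resource utilization) are often collapsed into a single scalar when comparing candidate schedules. A hypothetical weighted-sum sketch (the weights, attribute names, and scoring rule are my assumptions, not the authors' method):

```python
def schedule_score(plan, w_time=0.4, w_cost=0.4, w_util=0.2):
    """Weighted-sum score for a candidate BP schedule: lower is better.
    Time and cost are penalized, utilization is rewarded. Weights and
    attribute names are illustrative assumptions only."""
    return (w_time * plan["exec_time"]
            + w_cost * plan["exec_cost"]
            - w_util * plan["utilization"])

def best_schedule(plans):
    """Pick the candidate schedule with the lowest weighted score."""
    return min(plans, key=schedule_score)
```

In practice each objective would first be normalized to a comparable scale, and context events (resource unavailability, network overload) would trigger re-scoring of the remaining candidates.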