SPEC Research - Introducing the Predictive Data Analytics Working Group: Poster Paper
A. Bauer, Mark Leznik, Md Shahriar Iqbal, Daniel Seybold, Igor A. Trubin, Benjamin Erb, Jörg Domaschka, Pooyan Jamshidi
DOI: 10.1145/3491204.3527495

The research field of data analytics has grown significantly with the increase of gathered and available data. Accordingly, a large number of tools, metrics, and best practices have been proposed to make sense of this vast amount of data. To this end, benchmarking and standardization are needed to understand the proposed approaches better and continuously improve them. For this purpose, numerous associations and committees exist. One of them is SPEC (Standard Performance Evaluation Corporation), a non-profit corporation for the standardization and benchmarking of performance and energy evaluations. This paper gives an overview of the recently established SPEC RG Predictive Data Analytics Working Group. The mission of this group is to foster interaction between industry and academia by contributing research to the standardization and benchmarking of various aspects of data analytics.
SPEChpc 2021 Benchmark Suites for Modern HPC Systems
Junjie Li, A. Bobyr, Swen Boehm, W. Brantley, H. Brunst, Aurélien Cavelan, S. Chandrasekaran, Jimmy Cheng, F. Ciorba, Mathew E. Colgrove, Tony Curtis, Christopher Daley, Mauricio H. Ferrato, Mayara Gimenes de Souza, N. Hagerty, R. Henschel, G. Juckeland, J. Kelling, Kelvin Li, Ron Lieberman, Kevin B. McMahon, Egor Melnichenko, M. A. Neggaz, Hiroshi Ono, C. Ponder, Dave Raddatz, Severin Schueller, Robert Searles, Fedor Vasilev, V. G. M. Vergara, Bo Wang, Bert Wesarg, Sandra Wienke, Miguel Zavala
DOI: 10.1145/3491204.3527498

The SPEChpc 2021 suites are application-based benchmarks designed to measure the performance of modern HPC systems. The benchmarks support MPI, MPI+OpenMP, MPI+OpenMP target offload, and MPI+OpenACC, and are portable across all major HPC platforms.
Optimizing the Performance of Fog Computing Environments Using AI and Co-Simulation
Shreshth Tuli, G. Casale
DOI: 10.1145/3491204.3527490

This tutorial presents a performance engineering approach for optimizing the Quality of Service (QoS) of edge/fog/cloud computing environments using AI and coupled simulation, as developed in the Co-Simulation based Container Orchestration (COSCO) framework. It introduces fundamental AI and co-simulation concepts, their importance in QoS optimization, and performance engineering challenges in the context of fog computing. It also discusses how AI models, specifically deep neural networks (DNNs), can be used in tandem with simulated estimates to make optimal resource-management decisions. Additionally, we discuss a few use cases of training DNNs as surrogates to estimate key QoS metrics and of utilizing such models to build policies for dynamic scheduling in a distributed fog environment. The tutorial demonstrates these concepts using the COSCO framework. Metric monitoring and simulation primitives in COSCO demonstrate the efficacy of an AI- and simulation-based scheduler on a fog/cloud platform. Finally, we provide AI baselines for resource management problems that arise in the area of fog management.
Beware of the Interactions of Variability Layers When Reasoning about Evolution of MongoDB
Luc Lesoil, M. Acher, Arnaud Blouin, J. Jézéquel
DOI: 10.1145/3491204.3527489

With every commit and release, hundreds of tests are run under varying conditions (e.g., on different hardware and with different workloads), which can help to understand evolution and ensure non-regression of software performance. We hypothesize that performance is sensitive not only to the evolution of the software, but also to the different variability layers of its execution environment, spanning the hardware, the operating system, the build, and the workload processed by the software. Leveraging the MongoDB dataset, our results show that changes in hardware and workload can drastically impact performance evolution and thus should be taken into account when reasoning about performance. An open problem resulting from this study is how to manage the variability layers in order to efficiently test the performance evolution of a software system.
FADE: Towards Flexible and Adaptive Distance Estimation Considering Obstacles: Vision Paper
Marius Hadry, Veronika Lesch, Samuel Kounev
DOI: 10.1145/3491204.3527493

In the last decades, and especially during the pandemic, when many people stayed at home and ordered goods online, the need for efficient logistics systems has increased significantly. Hence, the performance of optimization techniques for logistic processes is becoming more and more important. These techniques often require estimates of distances to customers and facilities, where operators have to choose between exact results and short computation times. In this vision paper, we propose an approach for Flexible and Adaptive Distance Estimation (FADE). The central idea is to abstract map knowledge into a less complex graph to trade off computation time against result accuracy. We further propose applying concepts from self-aware computing in order to support dynamic adaptation to individual goals.
A Multiserver Approximation for Cloud Scaling Analysis
Siyu Zhou, C. Woodside
DOI: 10.1145/3491204.3527472

Queueing models of web service systems run at increasingly large scales, with large customer populations and with multiservers introduced by scaling up the services. "Scalable" multiserver approximations, in the sense that they are insensitive to customer population size, are essential for solution in a reasonable time. A thorough analysis of the potential errors, which is needed before the approximations can be used with confidence, is the goal of this work. Three scalable approximations are evaluated: an equivalent single server (SS), an approximation introduced by Rolia (RF), and one based on a binomial distribution for the queue state (AB). AB and SS are suggested by previous work but have not been evaluated before. For AB and SS, multiple classes are merged into one to calculate the waiting time. The analysis employs a novel traffic intensity measure for closed multiserver workloads. The vast majority of errors are less than 1%, with the worst cases being up to about 30%. The largest errors occur near the knee of the throughput/response time curves. Of the approximations, AB is consistently the most accurate and SS the least accurate.
Characterizing and Triaging Change Points
Jing Chen, Haiyang Hu, Dongjin Yu
DOI: 10.1145/3491204.3527487

Testing software performance continuously can greatly benefit from automated verification done on continuous integration (CI) servers, but it generates large amounts of noisy performance test data. To identify the change points in test data, statistical models have been developed in research. However, a considerable fraction of detected change points are false positives, i.e., changes that never actually need to be fixed. This work aims to give a detailed understanding of the features of true-positive change points and to provide an automatic approach to change-point triage, in order to reduce project members' burden. To achieve this goal, we begin by characterizing the change points using 31 features from three dimensions, namely time series, execution result, and file history. Then, we extract the proposed features for true-positive and false-positive change points, and train machine learning models to triage these change points. The results demonstrate that the features can be efficiently employed to characterize change points. Our model achieves an AUC of 0.985 on a median basis.
CTT: Load Test Automation for TOSCA-based Cloud Applications
Thomas F. Düllmann, A. Hoorn, Vladimir Yussupov, P. Jakovits, Mainak Adhikari
DOI: 10.1145/3491204.3527484

Despite today's rapid modeling and deployment capabilities to meet customer requirements in an agile manner, testing is still of utmost importance to avoid outages, unsatisfied customers, and performance problems. To tackle such issues, (load) testing is one of several approaches. In this paper, we introduce the Continuous Testing Tool (CTT), which enables the modeling of tests and test infrastructures along with the cloud system under test, as well as deploying and executing (load) tests against a fully deployed system in an automated manner. CTT employs the OASIS TOSCA standard to enable end-to-end support for continuous testing of cloud-based applications. We demonstrate CTT's workflow, its architecture, and its application to DevOps-oriented load testing and load testing of data pipelines.
MAPLE
Chetan Phalak, Dheeraj Chahal, Aniruddha Sen, Mayank Mishra, Rekha Singhal
DOI: 10.1145/3491204.3527497

Many Artificial Intelligence (AI) applications are composed of multiple machine learning (ML) and deep learning (DL) models. Intelligent process automation (IPA) requires a combination (sequential or parallel) of models to complete an inference task. These models have unique resource requirements, and hence exploring cost-efficient, high-performance deployment architectures, especially across multiple clouds, is a challenge. We propose MAPLE, a high-performance framework that supports building applications from composable models. The MAPLE framework is an innovative system for AI applications that can (1) recommend various model compositions, (2) recommend an appropriate system configuration based on the application's non-functional requirements, and (3) estimate the performance and cost of cloud deployment for the chosen design.
Analysis of Garbage Collection Patterns to Extend Microbenchmarks for Big Data Workloads
Samyak S. Sarnayak, Aditi Ahuja, Pranav Kesavarapu, Aayush Naik, Santhosh Kumar Vasudevan, Subramaniam Kalambur
DOI: 10.1145/3491204.3527473

Java uses automatic memory management: the user does not have to explicitly free used memory, as this is handled by the garbage collector. Garbage collection (GC) can take up a significant amount of time, especially in Big Data applications running large workloads, where GC can account for up to 50 percent of the application's run time. Although benchmarks have been designed to trace garbage collection events, these are not specifically suited for Big Data workloads, due to their unique memory usage patterns. We have developed a free and open-source pipeline to extract and analyze object-level details from any Java program, including benchmarks and Big Data applications such as Hadoop. The data contains information such as the lifetime, class, and allocation site of every object allocated by the program. Through the analysis of this data, we propose a small set of benchmarks designed to emulate some of the patterns observed in Big Data applications. These benchmarks also allow us to experiment with and compare some Java programming patterns.