Recent approaches to software performance modeling and validation share the idea of annotating software models with performance-related information (e.g., the operational profile) and transforming the annotated model into a performance model (e.g., a Stochastic Petri Net). To date, no standard has been defined to represent performance-related information in software artifacts, although clear advantages in tool interoperability and model transformations would stem from it. This paper questions whether a software performance ontology (i.e., a standard set of concepts and relations) is achievable. We consider three meta-models defined for software performance: the UML Schedulability, Performance and Time profile, the Core Scenario Model, and the Software Performance Engineering meta-model. We devise two approaches to the creation of an ontology: (i) bottom-up, which extracts common knowledge from the meta-models, and (ii) top-down, which is driven by a set of requirements.
{"title":"How far are we from the definition of a common software performance ontology?","authors":"V. Cortellessa","doi":"10.1145/1071021.1071044","DOIUrl":"https://doi.org/10.1145/1071021.1071044","url":null,"abstract":"The recent approaches to software performance modeling and validation share the idea of annotating software models with information related to performance (e.g. operational profile) and transforming the annotated model into a performance model (e.g. a Stochastic Petri Net). Up to date, no standard has been defined to represent the information related to performance in software artifacts, although clear advantages in tool interoperability and model transformations would stem from it. This paper is aimed at questioning whether a software performance ontology (i.e. a standard set of concepts and relations) is achievable or not. We consider three meta-models defined for software performance, that are the Schedulability, Performance and Time profile of UML, the Core Scenario Model and the Software Performance Engineering meta-model. We devise two approaches to the creation of an ontology: (i) bottom-up, that extracts common knowledge from the meta-models, (ii) top-down, that is driven from a set of requirements.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127068622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality of service requirements are normally given in terms of soft deadlines, such as "90% of responses should complete within one second". To estimate the probability of meeting the target delay, one must estimate the distribution of response time, or at least its tail. Exact analytic methods based on state-space analysis suffer from state explosion, and simulation, while feasible, is very time consuming. Rapid approximate estimation would be valuable, especially for cases that do not demand great precision and that require the exploration of many alternative models. This work adapts layered queueing analysis, which is highly scalable and provides variance estimates as well as mean values, to estimate soft-deadline success rates. It evaluates the use of an approximate Gamma distribution fitted to the mean and variance, and its application to examples of software systems. The evaluation finds that, for a definable set of situations, tail probabilities over 90% are estimated to within 1% accuracy, which is useful for practical purposes.
{"title":"Fast estimation of probabilities of soft deadline misses in layered software performance models","authors":"T. Zheng, C. Woodside","doi":"10.1145/1071021.1071041","DOIUrl":"https://doi.org/10.1145/1071021.1071041","url":null,"abstract":"Quality of service requirements are normally given in terms of soft deadlines, such as \"90% of responses should complete within one second\". To estimate the probability of meeting the target delay, one must estimate the distribution of response time, or at least its tail. Exact analytic methods based on state-space analysis suffer from state explosion, and simulation, which is also feasible, is very time consuming. Rapid approximate estimation would be valuable, especially for those cases which do not demand great precision, and which require the exploration of many alternative models.This work adapts layered queueing analysis, which is highly scalable and provides variance estimates as well as mean values, to estimate soft deadline success rates. It evaluates the use of an approximate Gamma distribution fitted to the mean and variance, and its application to examples of software systems. The evaluation finds that, for a definable set of situations, the tail probabilities over 90% are estimated well within a margin of 1% accuracy, which is useful for practical purposes.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124509799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With software systems becoming more complex and handling diverse and critical applications, the need for their thorough evaluation has become ever more important at each phase of software development. With the prevalent use of component-based design, the software architecture as well as the behavior of the individual components must be taken into account when evaluating a system. In the recent past, a number of studies have focused on architecture-based reliability estimation, but areas such as security and cache behavior still lack such an approach. In this paper we propose an architecture-based unified hierarchical model for software reliability, performance, security, and cache behavior prediction. We define a metric called the vulnerability index of a software component for quantifying its (in)security. We provide expressions for predicting the overall behavior of the system based on the characteristics of individual components, taking second-order architectural effects into account for an accurate prediction. The approach also facilitates the identification of reliability, performance, security, and cache-performance bottlenecks. In addition, we illustrate through case studies how the approach can be applied to software systems, and we provide expressions for sensitivity analysis.
{"title":"Architecture based analysis of performance, reliability and security of software systems","authors":"V. Sharma, Kishor S. Trivedi","doi":"10.1145/1071021.1071046","DOIUrl":"https://doi.org/10.1145/1071021.1071046","url":null,"abstract":"With software systems becoming more complex, and handling diverse and critical applications, the need for their thorough evaluation has become ever more important at each phase of software development. With the prevalent use of component-based design, the software architecture as well as the behavior of the individual components of the system needs to be taken into account when evaluating it. In recent past a number of studies have focused on architecture based reliability estimation. But areas such as security and cache behavior still lack such an approach. In this paper we propose an architecture based unified hierarchical model for software reliability, performance, security and cache behavior prediction. We define a metric called the vulnerability index of a software component for quantifying its (in)security. We provide expressions for predicting the overall behavior of the system based on the characteristics of individual components, which also takes into account second order architectural effects for providing an accurate prediction. This approach also facilitates the identification of reliability, performance, security and cache performance bottlenecks. In addition we illustrate how the approach could be applied to software systems by case studies and also provide expressions to perform sensitivity analysis.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"458 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123049012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accounting is critical for information technology budgeting and chargeback. Traditional accounting in UNIX/Linux systems is known as process accounting, in which an accounting record is created when a process ends. System administrators typically aggregate accounting records by individual users or groups. As Web and application servers, along with databases, handle requests and transactions for multiple entities in various Web applications and services, LPAR accounting and transaction accounting become increasingly critical for service providers in shared-resource environments. In this paper we present the design and implementation of a J2EE accounting application for resource usage metering. For process accounting, the resulting system can generate usage reports by project, by group, by user, by command, or by a combination of these identifiers. For dynamically changing partitions, it generates reports for shared resources including CPUs, memory, disks, file systems, and network interfaces. For transaction accounting, it generates reports based on account classes, provided that applications are instrumented. It is the first known J2EE accounting application for UNIX/Linux transaction accounting.
{"title":"A J2EE application for process accounting, LPAR accounting, and transaction accounting","authors":"C. Wu, William P. Horn","doi":"10.1145/1071021.1071049","DOIUrl":"https://doi.org/10.1145/1071021.1071049","url":null,"abstract":"Accounting is critical for information technology budgeting and chargeback. Traditional accounting in UNIX/Linux systems is known as process accounting, in which an accounting record is created when a process ends. System administrators typically aggregate accounting records based on individual users or groups. As Web and application servers along with databases handle requests and transactions for multiple entities in various Web applications and services, LPAR accounting and transaction accounting become increasingly critical for service providers in shared resource environments. In this paper we present the design and implementation of a J2EE accounting application for resource usage metering. For process accounting the resulting system can generate usage reports by projects, by groups, by users, by commands, or by a combination of these identifiers. For dynamically changing partitions it generates reports for shared resources including CPUs, memories, disks, file systems, and network interfaces. For transaction accounting it generates reports based on account classes provided that applications are instrumented. It is the first known J2EE accounting application for UNIX/Linux transaction accounting.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131526240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software performance analysis based on performance models can be applied in early phases of the software development cycle to characterize the quantitative behavior of software systems. We propose an approach based on queueing network models for performance prediction of software systems at the software architecture level, specified in UML. Starting from annotated UML Use Case, Activity, and Deployment diagrams, we derive performance models based on multichain and multiclass Queueing Networks (QN). The UML model is annotated according to the UML Profile for Schedulability, Performance and Time Specification. The proposed algorithm translates the annotated UML specification into QN performance models, which can then be analyzed using standard solution techniques. Performance results are reported back at the software architecture level in the UML diagrams. As our approach can be fully automated and uses standard UML annotations, it can be integrated with other performance modeling approaches. Specifically, we discuss how this QN-based approach can be integrated with an existing simulation-based performance modeling tool.
{"title":"Performance evaluation of UML software architectures with multiclass Queueing Network models","authors":"S. Balsamo, M. Marzolla","doi":"10.1145/1071021.1071025","DOIUrl":"https://doi.org/10.1145/1071021.1071025","url":null,"abstract":"Software performance based on performance models can be applied at early phases of the software development cycle to characterize the quantitative behavior of software systems. We propose an approach based on queueing networks models for performance prediction of software systems at the software architecture level, specified by UML. Starting from annotated UML Use Case, Activity and Deployment diagrams we derive a performance models based on multichain and multiclass Queueing Networks (QN). The UML model is annotated according to the UML Profile for Schedulability, Performance and Time Specification. The proposed algorithm translates the annotated UML specification into QN performance models, which can then be analyzed using standard solution techniques. Performance results are reported back at the software architecture level in the UML diagrams. As our approach can be fully automated and uses standard UML annotations, it can be integrated with other performance modeling approaches. Specifically, we discuss how this QN-based approach can be integrated with an existing simulation-based performance modeling tool.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129370964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resource pools are computing environments that offer virtualized access to shared resources. When used effectively, they can align the use of capacity with business needs (flexibility), lower infrastructure costs (via resource sharing), and lower operating costs (via automation). This paper describes the Quartermaster capacity manager service for managing such pools. It implements a trace-based technique that models workload (e.g., application) resource demands, their corresponding resource allocations, and resource access quality of service. The primary advantages of the technique are its accuracy, generality, support for resource access qualities of service, and optimizing search method. We pose general capacity management questions for resource pools and explain how the capacity manager helps to address them in an automated manner. A case study demonstrates and validates the method on empirical data from an enterprise application. We show that the technique exploits much of the resource savings to be achieved from resource sharing and is significantly more accurate at estimating per-server required capacity than a benchmark method used in practice to manage a resource pool. Finally, we explain how these problems relate to other practices regarding enterprise capacity management and software performance engineering.
{"title":"A capacity management service for resource pools","authors":"J. Rolia, L. Cherkasova, M. Arlitt, A. Andrzejak","doi":"10.1145/1071021.1071047","DOIUrl":"https://doi.org/10.1145/1071021.1071047","url":null,"abstract":"Resource pools are computing environments that offer virtualized access to shared resources. When used effectively they can align the use of capacity with business needs (flexibility), lower infrastructure costs (via resource sharing), and lower operating costs (via automation). This paper describes the Quartermaster capacity manager service for managing such pools. It implements a trace-based technique that models workload (e.g., application) resource demands, their corresponding resource allocations, and resource access quality of service. The primary advantages of the technique are its accuracy, generality, support for resource access qualities of service, and optimizing search method. We pose general capacity management questions for resource pools and explain how the capacity manager helps to address them in an automated manner. A case study demonstrates and validates the method on empirical data from an enterprise application. We show that the technique exploits much of the resource savings to be achieved from resource sharing and is significantly more accurate at estimating per-server required capacity than a benchmark method used in practice to manage a resource pool. Finally, we explain how the problems relate to other practices regarding enterprise capacity management and software performance engineering.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126195213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper analyzes the performability of client-server applications that use a separate fault management architecture for monitoring and controlling the status of the application's software and hardware. The analysis considers the impact of the management components and connections, and of their reliability, on performability. The approach combines minpath algorithms, Layered Queueing analysis, and non-coherent fault tree analysis for efficient computation of the application's expected reward rate.
{"title":"Computing the performability of layered distributed systems with a management architecture","authors":"O. Das, C. Woodside","doi":"10.1145/974044.974074","DOIUrl":"https://doi.org/10.1145/974044.974074","url":null,"abstract":"This paper analyzes the performability of client-server applications that use a separate fault management architecture for monitoring and controlling of the status of the application software and hardware. The analysis considers the impact of the management components and connections, and their reliability, on performability. The approach combines minpath algorithms, Layered Queueing analysis and non-coherent fault tree analysis techniques for efficient computation of expected reward rate of the application.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116947814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Peer-to-peer, or simply P2P, systems have recently emerged as a popular paradigm for building distributed applications. One key aspect of P2P system design is the mechanism used for content location, and a number of different approaches are currently in use. In particular, the location algorithm used in Gnutella, a popular and extensively analyzed P2P file-sharing application, is based on flooding messages through the network, which results in significant processing overhead on the participant nodes and thus poor performance. In this paper, we provide an extensive performance evaluation of alternative algorithms for content location and retrieval in P2P systems, in particular the Freenet and Gnutella systems. We compare the original Freenet and Gnutella algorithms, a previously proposed interest-based algorithm, and two new algorithms that also exploit locality of interest among peers for efficient content location. Unlike previous proposals, the new algorithms organize the peers into communities that share interests: two peers are said to have a common interest if they share some of their locally stored files. To evaluate the performance of these algorithms, we use a previously developed Freenet simulator and build a new Gnutella simulator that includes several realistic system characteristics. We show that the new community-based algorithms improve the original Gnutella content location latency (and thus the system QoS) and system load by up to 31% and 30%, respectively. Our algorithms also reduce the average Freenet request and response path lengths by up to 39% and 31%, respectively. Furthermore, we show that, compared to the previously proposed interest-based algorithm, our new algorithms improve query latency by up to 27% without a significant increase in load.
{"title":"Using locality of reference to improve performance of peer-to-peer applications","authors":"M. Barbosa, M. Costa, J. Almeida, Virgílio A. F. Almeida","doi":"10.1145/974044.974079","DOIUrl":"https://doi.org/10.1145/974044.974079","url":null,"abstract":"Peer-to-peer, or simply P2P, systems have recently emerged as a popular paradigm for building distributed applications. One key aspect of the P2P system design is the mechanism used for content location. A number of different approaches are currently in use. In particular, the location algorithm used in Gnutella, a popular and extensively analyzed P2P file sharing application, is based on flooding of messages in the network, which results in significant processing overhead on the participant nodes and thus, poor performance.In this paper, we provide an extensive performance evaluation of alternative algorithms for content location and retrieval in P2P systems, in particular, the Freenet and Gnutella systems. We compare the original Freenet and Gnutella algorithms, a previously proposed interest-based algorithm and two new algorithms which also explore locality of interest among peers to efficiently allow content location. Unlike previous proposals, the new algorithms organize the peers into communities that share interests. Two peers are said to have common interest if they share some of the locally stored files.In order to evaluate the performance of these algorithms, we use a previously developed Freenet simulator and build a new Gnutella simulator, which includes several realistic system characteristics. We show that the new community-based algorithms improve the original Gnutella content location latency (and thus the system QoS) and system load by up to 31% and 30%, respectively. Our algorithms also reduce the average Freenet request and response path lengths by up to 39% and 31%, respectively. Furthermore, we show that, compared to the previously proposed interest-based algorithm, our new algorithms improve query latency by up to 27% without a significant increase in the load.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127520267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Component-based programming is a methodology for designing software systems as assemblages of components with a low degree of coherence and a high degree of orthogonality. Decoupling and orthogonality, however, require coupling and assembling on the side of the component's client. This paper addresses performance problems that occur specifically in the composition of library components. We discuss the design and implementation of a composer, which assembles library components based on a classification of their declarative performance descriptions. Employing an off-the-shelf decision-tree procedure for selection, and the C++ technique of traits for propagating the desired behavior throughout the whole library, our system allows for rapid performance predictions. It is applied to FFTL, an "STL-like" C++ library for the Fast Fourier Transform.
{"title":"Rapid performance prediction for library components","authors":"S. Schupp, Marcin Zalewski, Kyle Ross","doi":"10.1145/974044.974054","DOIUrl":"https://doi.org/10.1145/974044.974054","url":null,"abstract":"Component-based programming is a methodology for designing software systems as assemblages of components with a low degree of coherence and a high degree of orthogonality. Decoupling and orthogonality, however, require coupling and assembling on the side of the component's client. This paper addresses performance problems that occur in the composition specifically of library components. We discuss the design and implementation of a composer, which assembles library components based on a classification of their declarative performance descriptions. Employing an off-the-shelf decision-tree procedure for selecting, and the C++ technique of traits for propagating the desired behavior throughout the whole library, our system allows for rapid performance predictions. It is applied to FFTL, an \"STL-like\" C++ library for the Fast Fourier Transform.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126032356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rule-based software systems have become very common in telecommunications settings, particularly for monitoring and controlling workflow management of large networks. At the same time, shorter deployment cycles are frequently necessary, which has led to modifications being made to the rule base without a full assessment of the impact of the new rules through extensive performance testing. An approach is presented that helps assess the performance of rule-based systems, in terms of their CPU utilization, using modeling and analysis. A case study is presented applying this approach to a large rule-based system that is used to monitor a very large industrial telecommunications network.
{"title":"Estimating the CPU utilization of a rule-based system","authors":"Alberto Avritzer, Johannes P. Ros, E. Weyuker","doi":"10.1145/974044.974046","DOIUrl":"https://doi.org/10.1145/974044.974046","url":null,"abstract":"Rule-based software systems have become very common in telecommunications settings, particularly to monitor and control workflow management of large networks. At the same time, shorter deployment cycles are frequently necessary which has led to modifications being made to the rule base, without a full assessment of the impact of these new rules through extensive performance testing.An approach is presented that helps assess the performance of rule-based systems, in terms of its CPU utilization, by using modeling and analysis. A case study is presented applying this approach to a large rule-based system that is used to monitor a very large industrial telecommunications network.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115199635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}