The notion of technical debt attracts significant attention, especially in the context of reconciling architecture and agile development. However, most work on technical debt is still largely informal, and where it offers a formalization, that formalization is often ad hoc. In this paper, we provide a detailed, formal analysis of decision making on technical debt in development. Using this formalization, we show that optimal decision making is not effectively computable in real-world situations, and we provide several well-defined approximations that nevertheless make the problem tractable in practice. Combining these approximations into a single method yields a lightweight approach that can be effectively applied in iterative software development, including agile approaches.
{"title":"A formal approach to technical debt decision making","authors":"Klaus Schmid","doi":"10.1145/2465478.2465492","DOIUrl":"https://doi.org/10.1145/2465478.2465492","url":null,"abstract":"The notion of technical debt attracts significant attention, especially in the context of reconciling architecture and agile development. However, most work on technical debt is still largely informal and if it provides a formalization it is often ad-hoc. In this paper, we provide a detailed, formal analysis of decision making on technical debt in development. Using this formalization, we show that optimal decision making is not effectively computable in real-world situations and provide several well-defined approximations that allow to handle the problem nevertheless in practical situations. Combining these approximations in a single method leads to a light-weight approach that can be effectively applied in iterative software development, including agile approaches.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134048593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software design patterns are proven solutions for recurring design problems. Decisions on the use of a pattern in a software design form a specific but important class of design decisions. However, despite their importance, these decisions are often made incorrectly and are rarely documented. In our survey, about 90% of the participants reported having experienced such problems. We therefore propose an approach that supports both the appropriate use of design patterns and the documentation of such decisions. The main idea is to create a pattern catalogue in which each pattern (as part of its catalogue entry) is annotated with general questions on the appropriateness of using that pattern. The envisioned benefits of this approach are a more appropriate use of design patterns and documented design decisions on the use of patterns, with positive effects on evolution. In this paper, we present the enriched pattern catalogue and the results of a survey with 21 software engineers as a validation of some entries of the catalogue.
{"title":"On the appropriate rationale for using design patterns and pattern documentation","authors":"Zoya Durdik, Ralf H. Reussner","doi":"10.1145/2465478.2465491","DOIUrl":"https://doi.org/10.1145/2465478.2465491","url":null,"abstract":"Software design patterns are proven solutions for recurring design problems. Decisions on the use of a pattern in a software design form a specific but important class of design decisions. However, despite their importance, these design decisions are often mistaken and rarely documented. In our survey, about 90% of the participants confirmed to have experienced such problems. Therefore, we propose an approach that supports the appropriate use of design patterns and documentation of such decisions. The main idea is to create a pattern catalogue, where a pattern (as part of its catalogue entry) is annotated with general questions on the appropriateness of the use of the pattern. The envisioned benefits of this approach are a more appropriate use of design patterns, and documented design decisions on the use of patterns with positive effects on evolution. In this paper, we present the enriched pattern catalogue, and results of a survey with 21 software engineers as a validation of some entries of the pattern catalogue.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116360265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the automotive industry, ever more complex electronics and software systems are being developed to enable innovation and decrease costs. Besides the complex multimedia, comfort, and safety systems of conventional vehicles, automotive companies must develop increasingly complex engine, aftertreatment, and energy management systems for their (hybrid) electric vehicles to reduce fuel consumption and harmful emissions. MATLAB/Simulink is one of the most popular graphical modeling languages and simulation tools for validating and testing control software systems. Due to the increasing complexity and size of the Simulink models of automotive software systems, maintaining these models has become a necessity. In this paper, we define metrics for assessing the modularity of Simulink models. A Java tool developed to measure the defined metrics on Simulink models interfaces with a visualization tool to facilitate maintenance tasks. The modularity metrics are furthermore validated in two phases. In the first phase, the modularity measurement is validated against experts' evaluation of a system. In the second phase, we study the relationship between metric values and the number of faults. We observe that high coupling metric values frequently correspond to a high number of faults. The modularity metrics will be extended to architectural quality metrics for automotive systems.
{"title":"Simulink models are also software: modularity assessment","authors":"Y. Dajsuren, M. Brand, Alexander Serebrenik, S. Roubtsov","doi":"10.1145/2465478.2465482","DOIUrl":"https://doi.org/10.1145/2465478.2465482","url":null,"abstract":"In automotive industry, more and more complex electronics and software systems are being developed to enable the innovation and to decrease costs. Besides the complex multimedia, comfort, and safety systems of conventional vehicles, automotive companies are required to develop more and more complex engine, aftertreatment, and energy management systems for their (hybrid) electric vehicles to reduce fuel consumption and harmful emissions. MATLAB/Simulink is one of the most popular graphical modeling languages and a simulation tool for validating and testing control software systems. Due to the increasing complexity and size of Simulink models of automotive software systems, it has become a necessity to maintain the Simulink models.\u0000 In this paper, we defined metrics for assessing the modularity of Simulink models. A Java tool developed to measure the defined metrics on Simulink models interfaces with a visualization tool to facilitate the maintenance tasks of the Simulink models. The modularity metrics is furthermore validated in two phases. In the first phase, the modularity measurement is validated against the experts evaluation of a system. In the second phase, we studied the relationship between metric values and number of faults. We have observed that high coupling metric values frequently correspond to number of faults. Modularity metrics will be extended to architectural quality metrics for automotive systems.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126349620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a semi-automated approach and framework for cost-aware recovery from service inconsistency arising from unreliable service actions. A range of costs, such as time, are parameterised and modelled generically using cost algebras. With respect to a user-provided business specification, we distinguish end-state consistency, which must be achieved at service completion, from strong consistency, which may be momentarily violated. Our approach ensures optimal end-state consistency for services where action failure may lead to temporary violations of strong consistency or end-state consistency. Without such support, enterprises cannot handle strong-consistency violations optimally and dynamically, especially with respect to a variety of costs. Our approach provides quantitative analysis by defining a service model as a high-level message sequence chart (hMSC), annotating service actions with costs, and then interpreting the model as a weighted (Mazurkiewicz) trace language, catering for costs in the presence of true concurrency. We devise a framework and method that checks such a model and ensures service end-state consistency optimally by concatenating the traces of recovery strategies (expressed as MSCs) from an enterprise service repository. We evaluate our approach using a popular online-shop case study.
{"title":"Towards cost-aware service recovery","authors":"Terry G. Zhou, I. Peake, H. Schmidt","doi":"10.1145/2465478.2465484","DOIUrl":"https://doi.org/10.1145/2465478.2465484","url":null,"abstract":"We present a semi-automated approach and framework for cost-aware recovery from service inconsistency arising due to unreliable service actions. A range of costs such as time are parameterised and modelled generically using cost algebras. With respect to a user-provided business specification, we distinguish end-state consistency, which must be achieved at service completion, from strong consistency, which may be momentarily violated. Our approach ensures optimal end-state consistency for services where action failure may lead to temporary violations of strong consistency or end-state consistency. Enterprises could not otherwise optimally and dynamically handle strong consistency violation, especially with respect to a variety of costs. Our approach provides quantitative analysis by defining a service model as an high-level message sequence chart (hMSC), annotating service actions with costs, then interpreting the model as a weighted (Mazurkiewicz) trace language, catering for costs in the presence of true concurrency. We devise a framework and method which checks such a model and ensures service end-state consistency optimally by concatenating the traces of recovery strategies (expressed by MSCs) from an enterprise service repository. We evaluate our approach using a popular online shop case study.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"310 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122732058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electric vehicles carry only a limited amount of energy, so their devices need to be managed efficiently to maximize the vehicle's autonomy. Vehicle management is achieved by embedded systems modeled following the AUTOSAR standard. AUTOSAR covers most automotive concerns, but it lacks models for energy consumption and user-oriented Quality of Service. This paper presents ORQA, a framework to model and manage electric vehicle devices in terms of energy consumption and user-oriented Quality of Service. At design time, the architects choose and tune the actual vehicle device models through their power requirements and, where appropriate, quality levels. The generated implementation is then embedded in the existing AUTOSAR models. Thus, at run time, the vehicle's system is able to evaluate the global consumption of a trip and to propose a specific driving strategy to the user. The optional devices are managed throughout the trip, based on the driver's preferences. ORQA is illustrated with a classic use case: a work-to-home trip.
{"title":"ORQA: modeling energy and quality of service within AUTOSAR models","authors":"Borjan Tchakaloff, S. Saudrais, Jean-Philippe Babau","doi":"10.1145/2465478.2465488","DOIUrl":"https://doi.org/10.1145/2465478.2465488","url":null,"abstract":"Electric vehicles embed a low amount of energy, so their devices need to be managed efficiently to optimize the vehicle autonomy. A vehicle management is achieved by the embedded systems, modeled following the AUTOSAR standard. AUTOSAR covers most of the automotive concerns, but it lacks energy consumption and user-oriented Quality of Service models. This paper presents ORQA, a framework to model and manage the electric vehicle devices through energy consumption and user-oriented Quality of Service. At design time, the architects choose and tune the actual vehicle device models through their power requirements and, if appropriate, quality levels.\u0000 The generated implementation is then embedded in the existing AUTOSAR models. Thus, at run-time, the vehicle's system is able to evaluate the global consumption of a trip and to propose the user a specific driving strategy. The optional devices are managed throughout the trip, based on the driver preferences. ORQA is illustrated with a classic use-case: a work to home trip.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121852558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Managing the Quality of Service (QoS) of service-based systems is a key challenge in producing systems that fulfill their requirements. Verifying that a system respects a QoS contract becomes more and more difficult as systems grow in complexity. Moreover, systems have to evolve in order to fulfill constantly changing requirements. As QoS properties are influenced by hidden factors, such as the connection rate or the system execution itself, determining the cause of a performance degradation is not straightforward. In this paper, we propose to identify causal relations in order to make these hidden factors of influence explicit. We focus specifically on the consequences of system evolution with respect to QoS properties: using causal relations, we aim to predict the possible overhead caused by an evolution. This paper shows, through a business process example, how our evolution analysis helps to understand the effect of evolution on QoS properties such as response time. We demonstrate its effectiveness by comparing the predictions with measured values.
{"title":"A causal model to predict the effect of business process evolution on quality of service","authors":"Alexandre Feugas, Sébastien Mosser, L. Duchien","doi":"10.1145/2465478.2465486","DOIUrl":"https://doi.org/10.1145/2465478.2465486","url":null,"abstract":"Managing Quality of Service (QoS) of Service-based systems is a key challenge to produce systems that fulfill their requirements. Verifying the respect of a QoS contract in a system becomes more and more difficult as systems are more and more complex. Moreover, systems have to evolve in order to fulfil constantly changing requirements. As QoS properties are influenced by hidden factors such as connection rate or the system execution itself, determining the cause of a performance degradation is not mainstream. We propose in this paper to identify the causal relations to make explicit the hidden factors of influence. We more specifically focus on the consequences of system evolution with respect to QoS properties: using causal relations, we aim at predicting the possible overhead caused by an evolution. This paper shows through an example of Business Process how our evolution analysis helps to understand the effect of evolution on QoS property such as the Response Time. We show its efficiency by comparing the prediction with measured values.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115206753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In order to recover software architecture, various clustering techniques have been created to automatically partition a software system into meaningful subsystems. While these techniques have demonstrated their effectiveness, we observe that a key feature within most software systems has not been fully exploited: most well-designed systems follow strong architectural design rules that split the overall system into modules. These design rules are often manifested as special program constructs, such as shared data structures or abstract interfaces, which should not belong to any of the subordinate modules. We contribute a new perspective on architecture recovery based on this rationale, which enables the combination of design-rule-based clustering with other clustering techniques, as well as the splitting of a large system into subsystems. We evaluated our approach both quantitatively and qualitatively, using both open source and real industrial software projects.
{"title":"Leveraging design rules to improve software architecture recovery","authors":"Yuanfang Cai, Hanfei Wang, Sunny Wong, Linzhang Wang","doi":"10.1145/2465478.2465480","DOIUrl":"https://doi.org/10.1145/2465478.2465480","url":null,"abstract":"In order to recover software architecture, various clustering techniques have been created to automatically partition a software system into meaningful subsystems. While these techniques have demonstrated their effectiveness, we observe that a key feature within most software systems has not been fully exploited: most well-designed systems follow strong architectural design rules that split the overall system into modules. These design rules are often manifested as special program constructs, such as shared data structures or abstract interfaces, which should not belong to any of the subordinate modules. We contribute a new perspective of architecture recovery based on this rationale, which enables the combination of design-rule-based clustering with other clustering techniques, as well as enabling the splitting of a large system into subsystems. We evaluated our approach both quantitatively and qualitatively, using both open source and real industrial software projects.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"112 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123506072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-adaptation allows continuously running software systems to operate in changing and uncertain contexts while meeting their requirements across a broad range of situations, e.g., from low- to high-load conditions. As a consequence, requirements for self-adaptive systems are more complex than requirements for static systems, as they have to explicitly address properties of the self-adaptation layer. While approaches exist in the literature to capture this new type of requirement formally, their satisfaction cannot yet be analyzed in early design phases. In this paper, we apply RELAX to formally specify non-functional requirements for self-adaptive systems. We then apply our model-based SimuLizar approach for a semi-automatic analysis to test whether the self-adaptation layer ensures that these non-functional requirements are met. We evaluate our approach on the design of a proof-of-concept load balancer system. As this evaluation demonstrates, we can iteratively improve our system design by improving unsatisfactory self-adaptation rules.
{"title":"Performance analysis of self-adaptive systems for requirements validation at design-time","authors":"Matthias Becker, Markus Luckey, Steffen Becker","doi":"10.1145/2465478.2465489","DOIUrl":"https://doi.org/10.1145/2465478.2465489","url":null,"abstract":"Self-adaptation allows continuously running software systems to operate in changing and uncertain contexts while meeting their requirements in a broad range of contexts, e.g., from low to high load situations. As a consequence, requirements for self-adaptive systems are more complex than requirements for static systems as they have to explicitly address properties of the self-adaptation layer. While approaches exist in the literature to capture this new type of requirements formally, their achievement cannot be analyzed in early design phases yet. In this paper, we apply RELAX to formally specify non-functional requirements for self-adaptive systems. We then apply our model-based SimuLizar approach for a semi-automatic analysis to test whether the self-adaptation layer ensures that these non-functional requirements are met. We evaluate our approach on the design of a proof-of-concept load balancer system. As this evaluation demonstrates, we can iteratively improve our system design by improving unsatisfactory self-adaption rules.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129448168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance analysis is often conducted before full knowledge of a software system is available, in other words, under a certain degree of uncertainty. Uncertainty is particularly critical in the performance domain when it relates to the values of parameters such as the workload, the operational profile, the resource demand of services, or the service time of hardware devices. The goal of this paper is to explicitly consider uncertainty in the performance modelling and analysis process. In particular, we use a probabilistic formulation of parameter uncertainties and present a Monte Carlo simulation-based approach to systematically assess the robustness of an architectural model under such uncertainty. In case of unsatisfactory results, we introduce refactoring actions aimed at generating new software architectural models that better tolerate the parameter uncertainty. The proposed approach is illustrated on a case study from the e-Health domain.
{"title":"Model-based performance analysis of software architectures under uncertainty","authors":"Catia Trubiani, Indika Meedeniya, V. Cortellessa, A. Aleti, Lars Grunske","doi":"10.1145/2465478.2465487","DOIUrl":"https://doi.org/10.1145/2465478.2465487","url":null,"abstract":"Performance analysis is often conducted before achieving full knowledge of a software system, in other words under a certain degree of uncertainty. Uncertainty is particularly critical in the performance domain when it relates to values of parameters such as workload, operational profile, resource demand of services, service time of hardware devices, etc. The goal of this paper is to explicitly consider uncertainty in the performance modelling and analysis process. In particular, we use probabilistic formulation of parameter uncertainties and present a Monte Carlo simulation-based approach to systematically assess the robustness of an architectural model despite its uncertainty. In case of unsatisfactory results, we introduce refactoring actions aimed at generating new software architectural models that better tolerate the uncertainty of parameters. The proposed approach is illustrated on a case study from the e-Health domain.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121126495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software product lines (SPLs) are a well-known concept for efficiently developing product variants. However, migrating customised product copies to a product line remains a labour-intensive challenge due to the required comprehension of the differences among the implementations and of the SPL design decisions. Most existing SPL approaches focus on forward engineering. Only a few aim to handle SPL evolution, and even those lack support for variability reverse engineering, which is necessary for migrating product copies to a product line. In this paper, we present our continued concept of using component architecture information to enhance a variability reverse engineering process. Including this information particularly improves the difference identification as well as the variation point analysis and aggregation steps. We show how the concept can be applied by means of an illustrative example.
{"title":"Improving product copy consolidation by architecture-aware difference analysis","authors":"Benjamin Klatt, Martin Küster","doi":"10.1145/2465478.2465495","DOIUrl":"https://doi.org/10.1145/2465478.2465495","url":null,"abstract":"Software product lines (SPL) are a well-known concept to efficiently develop product variants. However, migrating customised product copies to a product line is still a labour-intensive challenge due to the required comprehension of differences among the implementations and SPL design decisions. Most existing SPL approaches are focused on forward engineering. Only few aim to handle SPL evolution, but even those lack support of variability reverse engineering, which is necessary for migrating product copies to a product line. In this paper, we present our continued concept on using component architecture information to enhance a variability reverse engineering process. Including this information particularly improves the difference identification as well as the variation point analysis and -aggregation steps. We show how the concept can be applied by providing an illustrating example.","PeriodicalId":110790,"journal":{"name":"International ACM SIGSOFT Conference on Quality of Software Architectures","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131339176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}