Title: Modeling process and product quality during maintenance
Authors: G. Stark
DOI: 10.1109/ICSM.1998.738494
Pub Date: 1998-03-16
Published in: Proceedings. International Conference on Software Maintenance (Cat. No. 98CB36272)

This paper contains data demonstrating our recent experiences with measuring the quality of evolving systems. Both process and product quality measures are discussed. This is an area in which more effective collaboration between practitioners and researchers would be of great value. We note that access to industrial software by researchers is often blocked by proprietary restrictions. When such restrictions can be eased, publication of analysis results is often hampered by the industrial owners and developers of the software. We believe that practice can be significantly aided by the data and results of broadly based research studies. Thus, closer collaboration in this area will benefit both communities.
Title: An experiment in identifying persistent objects in large systems
Authors: A. Cimitile, A. D. Lucia, G. D. Lucca
DOI: 10.1109/ICSM.1998.738500
Pub Date: 1998-03-16

We present an experiment in identifying coarse-grained persistent objects in a legacy system of an Italian public organisation. Object methods are searched for at the program level, driven by the minimisation of the coupling between objects. This strategy is useful in incremental migration projects requiring the identification of largely independent subsystems needing low re-engineering and decoupling costs, to be first encapsulated in different wrappers and then selectively replaced. The aim of the experiment was to evaluate the feasibility of this approach when applied to large software systems. The work presented in this paper is part of the project PROGRESS, a research project on process and software re-engineering in Italian public organisations carried out by Italian universities and research centres.
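The coupling-minimisation idea above can be sketched as a scoring function over candidate groupings. The sketch below assumes a hypothetical input format (a partition of data stores into candidate objects, plus program/data-store access pairs); the paper itself works at the program level of a legacy system, with a richer cost model.

```python
from collections import defaultdict

def coupling(partition, accesses):
    """Count cross-object references for a candidate grouping.

    partition: mapping data-store name -> candidate object id
    accesses: iterable of (program, data_store) pairs
    A program touching several candidate objects adds coupling;
    a search over partitions would favour low totals.
    """
    objects_per_program = defaultdict(set)
    for program, store in accesses:
        objects_per_program[program].add(partition[store])
    # Each program contributes one unit per extra object it touches.
    return sum(len(objs) - 1 for objs in objects_per_program.values())

partition = {"CUST": 0, "ORDER": 0, "STOCK": 1}
accesses = [("P1", "CUST"), ("P1", "ORDER"),
            ("P2", "STOCK"), ("P3", "ORDER"), ("P3", "STOCK")]
print(coupling(partition, accesses))  # 1 (only P3 crosses objects)
```

Merging ORDER and STOCK into one object would drive the count to zero, at the price of a larger, less cohesive object — the trade-off the experiment explores.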
Title: The visibility of maintenance in object models: an empirical study
Authors: M. Lindvall, Magnus Runesson
DOI: 10.1109/ICSM.1998.738489
Pub Date: 1998-03-16

This empirical study analyzes changes in C++ source code which occurred between two releases of an industrial software product and compares them with entities and relations available in object-oriented modeling techniques. The comparison offers increased understanding of what changes can and cannot be described using such object models. The goals were to investigate whether the object model in this particular project is either abstract and stable or detailed and sensitive to change, and whether or not changes made to the C++ source code are visible in the object model. Four metrics for characterization of change are formally defined and used, namely correctness, completeness, compliance, and visibility factor. The major finding is that even though many of the classes are changed, the majority of these changes turn out to be invisible in the object model. That is, changes made on the source code level are of a finer granularity than available in common object modeling concepts. This may explain why object models seem to be of little use in release-oriented development.
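One plausible reading of the visibility idea is a simple ratio over change sets. The sketch below is illustrative only: the paper's formal definitions of correctness, completeness, compliance and visibility factor are richer than this, and the entity names are invented.

```python
def visibility_factor(code_changes, model_visible):
    """Fraction of source-level changes that show up in the object model.

    code_changes: set of changed source entities.
    model_visible: the subset of those changes that can be expressed
    as changes to model entities or relations.
    """
    if not code_changes:
        return 1.0  # nothing changed, nothing hidden
    return len(code_changes & model_visible) / len(code_changes)

changed = {"Order.add", "Order.remove", "Invoice.print"}
in_model = {"Invoice.print"}          # e.g. a changed method signature
print(round(visibility_factor(changed, in_model), 2))  # 0.33
```

A low ratio would match the paper's finding: most source-level edits are finer-grained than anything the model can express.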
Title: Detection of logical coupling based on product release history
Authors: H. Gall, K. Hajek, M. Jazayeri
DOI: 10.1109/ICSM.1998.738508
Pub Date: 1998-03-16

Code-based metrics such as coupling and cohesion are used to measure a system's structural complexity. But dealing with large systems (those consisting of several million lines) at the code level faces many problems. An alternative approach is to concentrate on the system's building blocks, such as programs or modules, as the unit of examination. We present an approach that uses information in the release history of a system to uncover logical dependencies and change patterns among modules. We have developed the approach by working with 20 releases of a large telecommunications switching system. We use release information such as version numbers of programs, modules, and subsystems together with change reports to discover common change behavior (i.e. change patterns) of modules. Our approach identifies logical coupling among modules in such a way that potential structural shortcomings can be identified and further examined, pointing to restructuring or reengineering opportunities.
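The core of this kind of analysis can be sketched as co-change counting over change reports. The sketch below assumes each change report is reduced to the set of modules it touched (a simplification of the version-number and change-report data the paper uses).

```python
from collections import Counter
from itertools import combinations

def logical_coupling(change_reports, min_support=2):
    """Count how often each pair of modules changes together.

    change_reports: iterable of sets of module names, one per report.
    Pairs co-changing at least min_support times are candidate
    logical couplings worth inspecting for hidden dependencies.
    """
    pair_counts = Counter()
    for modules in change_reports:
        for a, b in combinations(sorted(modules), 2):
            pair_counts[(a, b)] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_support}

reports = [{"billing", "routing"}, {"billing", "routing", "ui"}, {"ui"}]
print(logical_coupling(reports))  # {('billing', 'routing'): 2}
```

Modules that repeatedly change together without any structural dependency between them are exactly the "logical coupling" the paper flags for restructuring.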
Title: Improving visual impact analysis
Authors: Matthew A. Hutchins, K. Gallagher
DOI: 10.1109/ICSM.1998.738521
Pub Date: 1998-03-16

Visual impact analysis is a software visualisation technique that lets software maintainers judge the impact of proposed changes and plan maintenance accordingly. An existing CASE tool uses a directed acyclic graph display derived from decomposition slicing of a program for visual impact analysis. In this paper, we analyse the graph display and show that it is semantically ambiguous and fails to show important information. We propose requirements for an improved display based on a definition of "interference" between variables in a maintenance context. The design for a new display is presented with a series of examples to illustrate its effectiveness. The display is focused on providing a straightforward method to analyse the impact of changes.
Title: A study of communication and cooperation in distributed software project teams
Authors: A. French, P. Layzell
DOI: 10.1109/ICSM.1998.738503
Pub Date: 1998-03-16

With a growing demand for software systems and an increase in their complexity, software development and maintenance is no longer the preserve of individual designers and programmers, but is a team-based activity. Indeed, development and maintenance has always involved a wide variety of stakeholders (customers, designers, programmers, maintainers, end-users), making the need for communication and cooperation an inherent characteristic. Changes in support technology, economic factors and the globalisation of software development and maintenance are increasingly resulting in the geographical separation of personnel. Where such distribution of personnel occurs, it is clearly important that there is high-quality communication and cooperation. This paper presents the results and conclusions from a study of communication and cooperation practices on a range of distributed commercial software projects.
Title: Clone detection using abstract syntax trees
Authors: I. Baxter, A. Yahin, L. D. Moura, Marcelo Sant'Anna, Lorraine Bier
DOI: 10.1109/ICSM.1998.738528
Pub Date: 1998-03-16

Existing research suggests that a considerable fraction (5-10%) of the source code of large-scale computer programs is duplicate code ("clones"). Detection and removal of such clones promises decreased software maintenance costs of possibly the same magnitude. Previous work was limited to detecting either near misses differing only in single lexemes, or near misses only between complete functions. The paper presents simple and practical methods for detecting exact and near-miss clones over arbitrary program fragments in program source code by using abstract syntax trees. Previous work also did not suggest practical means for removing detected clones. Since our methods operate in terms of the program structure, clones can be removed by mechanical methods producing in-lined procedures or standard preprocessor macros. A tool using these techniques is applied to a C production software system of some 400K source lines, and the results confirm the levels of duplication found by previous work. The tool produces the macro bodies needed for clone removal, and the macro invocations to replace the clones. The tool uses a variation of the well-known compiler method for detecting common subexpressions. This method determines exact tree matches; a number of adjustments are needed to detect equivalent statement sequences, commutative operands, and nearly exact matches. We additionally suggest that clone detection could also be useful in producing more structured code, and in reverse engineering to discover domain concepts and their implementations.
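The exact-match core of the technique (hashing subtrees into buckets, as in common-subexpression detection) can be sketched in a few lines using Python's own `ast` module. This is a toy for Python source rather than the paper's C tool, and it omits the near-miss matching, sequence detection and macro generation.

```python
import ast
from collections import defaultdict

def find_clones(source, min_nodes=4):
    """Group structurally identical subtrees as candidate exact clones.

    Each subtree is fingerprinted with ast.dump (positions excluded by
    default), and subtrees sharing a fingerprint land in one bucket.
    min_nodes filters out trivially small subtrees.
    """
    buckets = defaultdict(list)
    for node in ast.walk(ast.parse(source)):
        size = sum(1 for _ in ast.walk(node))
        if size < min_nodes:
            continue                 # too small to be an interesting clone
        buckets[ast.dump(node)].append(node)
    return [nodes for nodes in buckets.values() if len(nodes) > 1]

code = "x = a * b + c\ny = a * b + c\n"
# Both the repeated 'a * b + c' and its nested 'a * b' are reported.
print(len(find_clones(code)))  # 2
```

Note that nested clones are reported at every level; the paper's tool subsumes a subtree clone into its enclosing clone before proposing a macro.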
Title: Using the O-A diagram to encapsulate dynamic memory access
Authors: P. Tonella
DOI: 10.1109/ICSM.1998.738524
Pub Date: 1998-03-16

Good software design is characterized by low coupling between modules and high cohesion inside each module. This is obtained by encapsulating the details about the internal structure of data and exporting only public functions with a clean interface. For programming languages such as C, which offer little support for encapsulation, code analysis tools may help in assessing and improving the access to data structures. In this paper a new representation of the accesses of functions to dynamic locations, called the O-A diagram, is proposed. By isolating meaningful groups of functions working on common dynamic data, such a diagram can be used to evaluate the encapsulation in a program and to drive possible interventions to improve it. Experimental results suggest that the aggregations identified by the O-A diagram are actually cohesive functions operating on a shared data structure. The results are useful in themselves, by providing the programmer with information about the organization of the accesses to dynamic memory. In addition the O-A diagram permits highlighting violations of encapsulation, so that proper restructuring actions can be performed.
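The grouping step behind the O-A diagram can be approximated as connected components of a bipartite function/location graph. The sketch below assumes the access sets are already known (the paper derives them for C programs via static analysis) and the names are invented.

```python
from collections import defaultdict

def shared_data_groups(accesses):
    """Group functions linked through common dynamic locations.

    accesses: mapping function name -> set of allocation-site labels.
    Functions connected (directly or transitively) by shared sites end
    up in one group, a candidate module encapsulating that data.
    """
    parent = {}

    def find(x):                      # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for fn, sites in accesses.items():
        for site in sites:
            parent[find(("f", fn))] = find(("s", site))

    groups = defaultdict(set)
    for fn in accesses:
        groups[find(("f", fn))].add(fn)
    return [sorted(g) for g in groups.values()]

acc = {"push": {"stack"}, "pop": {"stack"}, "log": {"buffer"}}
print(shared_data_groups(acc))  # [['pop', 'push'], ['log']]
```

A function appearing in a group whose data it should not touch is the kind of encapsulation violation the diagram makes visible.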
Title: Do program transformations help reverse engineering?
Authors: K. Bennett
DOI: 10.1109/ICSM.1998.738516
Pub Date: 1998-03-16

Program transformations have been advocated as a method for accomplishing reverse engineering. The hypothesis is that the original source code can be progressively transformed into alternative forms, but with the same semantics. At the end of the process, an equivalent program is acquired, but one which is much easier to understand and more maintainable. We have been undertaking an extensive programme of research over twelve years into the design and development of transformations for the support of software maintenance. The paper very briefly explains the theory, practice and tool support for transformational systems, but does not present new theoretical results. The main results are an analysis of the strengths and weaknesses of the approach, based on experience with case studies and industrial applications. The evaluation framework used (called DERE) is that presented in Bennett and Munro (1998). It is hoped that the results will be of benefit to industry, who might be considering using the technology, and to other researchers interested in addressing the open problems. The overall conclusion is that transformations can help in the bottom-up analysis and manipulation of source code at approximately the 3GL level, and have proved successful in code migration, but need to be complemented by other top-down techniques to be useful at higher levels of abstraction or in more ambitious re-engineering projects.
Title: Identification of green, yellow and red legacy components
Authors: M. C. Ohlsson, C. Wohlin
DOI: 10.1109/ICSM.1998.738484
Pub Date: 1998-03-16

Software systems often live longer than expected, and it is a challenge to make sure that they grow old gracefully. This implies that methods are needed to ensure that system components remain maintainable. In this paper, the need to investigate, classify and study software components is emphasized. A classification method is proposed, based on classifying software components as green, yellow or red. The classification scheme is complemented with a discussion of suitable models to identify problematic components. The scheme and the models are illustrated in a small case study to highlight the opportunities. The long-term objective of the work is to define methods, models and metrics suitable for identifying software components which have to be taken care of through either tailored processes (e.g. additional focus on verification and validation) or reengineering. The case study indicates that this long-term objective is realistic and worthwhile.
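The traffic-light idea lends itself to a simple threshold rule. The sketch below uses an invented criterion (defect counts per component with two cut-offs); the paper discusses several candidate models rather than this exact rule.

```python
def classify_components(fault_counts, yellow=2, red=5):
    """Classify components as green, yellow or red by fault history.

    fault_counts: mapping component name -> defects in recent releases.
    Thresholds are illustrative and would be calibrated per project.
    """
    colours = {}
    for name, faults in fault_counts.items():
        if faults >= red:
            colours[name] = "red"      # candidate for reengineering
        elif faults >= yellow:
            colours[name] = "yellow"   # extra verification and validation
        else:
            colours[name] = "green"    # maintain as usual
    return colours

print(classify_components({"parser": 7, "logger": 3, "ui": 0}))
# {'parser': 'red', 'logger': 'yellow', 'ui': 'green'}
```

In practice the inputs would come from the kinds of models the paper surveys (fault trends across releases, change effort), not a single raw count.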