Forward dynamic object-oriented program slicing
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756773
Yeong-Tae Song, D. Huynh
Object-oriented programming has been considered one of the most promising methods in program development and maintenance. An important feature of object-oriented programs (OOPs) is their reusability, which can be achieved through the inheritance of classes or reusable components. We propose an algorithm to decompose OOPs with respect to some variables or objects of interest using the forward dynamic slicing technique. The algorithm recursively decomposes constructors and member functions with respect to the variables specified in a slicing criterion. It is an extension of the interprocedural program slicing algorithm of Song and Huynh (1998), which is based on the forward slicing technique of Korel and Yalamanchili (1994). The algorithm analyzes message passing and parameter passing and constructs dynamic object relationship diagrams (DORD). As a result, the algorithm produces not only the statement-level slice (the traditional slice) but also the DORD, which shows the relationships among the objects with respect to the variables specified in the slicing criterion.
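The core of forward dynamic slicing can be pictured as a single forward pass over the execution trace. The sketch below (with a hypothetical trace format of statement id, defined variables, and used variables) illustrates that data-dependence propagation only; it is not the authors' algorithm, which additionally handles constructors, member functions, message passing, control dependences, and construction of the DORD.

```python
# Minimal sketch of forward dynamic slicing over an execution trace.
# Hypothetical trace entry: (stmt_id, defs, uses) -- the statement executed,
# the variables it defines, and the variables it uses. Control dependences
# are deliberately ignored to keep the propagation idea visible.

def forward_dynamic_slice(trace, criterion_vars):
    """Return the set of statement ids whose execution affected criterion_vars."""
    affecting = {}        # var -> set of stmt_ids that influenced its current value
    for stmt_id, defs, uses in trace:
        # Statements influencing this statement's inputs
        influence = set()
        for v in uses:
            influence |= affecting.get(v, set())
        for v in defs:
            # The defining statement plus everything influencing its inputs
            affecting[v] = influence | {stmt_id}
    slice_stmts = set()
    for v in criterion_vars:
        slice_stmts |= affecting.get(v, set())
    return slice_stmts

# Example: x = 1; y = 2; z = x + y  -> slice on z includes all three statements
trace = [(1, {"x"}, set()), (2, {"y"}, set()), (3, {"z"}, {"x", "y"})]
print(sorted(forward_dynamic_slice(trace, {"z"})))   # [1, 2, 3]
```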
{"title":"Forward dynamic object-oriented program slicing","authors":"Yeong-Tae Song, D. Huynh","doi":"10.1109/ASSET.1999.756773","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756773","url":null,"abstract":"Object-oriented programming has been considered a most promising method in program development and maintenance. An important feature of object-oriented programs (OOPs) is their reusability which can be achieved through the inheritance of classes or reusable components. We propose an algorithm to decompose OOPs with respect to some variables or objects of interest using the forward dynamic slicing technique. The algorithm recursively decomposes constructors and member functions with respect to the specified variables in a slicing criterion. It is an extension of the interprocedural program slicing algorithm by Song and Huynh (1998) which is based on the forward slicing technique by Korel and Yalamanchili (1994). The algorithm analyzes message passings and parameter passings and constructs dynamic object relationship diagrams (DORD). As results, the algorithm produces not only the statement level slice (called traditional slice), but also the DORD that shows the relationships among the objects with respect to the specified variables in a slicing criterion.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115652217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locating program features using execution slices
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756769
W. E. Wong, S. Gokhale, J. R. Horgan, Kishor S. Trivedi
An important step towards effective software maintenance is to locate the code relevant to a particular feature. We report a study applying an execution slice-based technique to a reliability and performance evaluator to identify the code that is unique to a feature or common to a group of features. Supported by tools called ATAC and χVue, program features in the source code can be tracked down to files, functions, lines of code, decisions, and then c-uses or p-uses. Our study suggests that the technique can provide software programmers and maintainers with a good starting point for quick program understanding.
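The set algebra behind execution-slice-based feature location is simple enough to sketch: code unique to a feature is what every test invoking the feature executes minus anything a test excluding it executes. The sketch below assumes coverage is abstracted to sets of covered entities (files, functions, lines, decisions, c-/p-uses); the names are illustrative and not part of ATAC or χVue.

```python
# Minimal sketch of execution-slice-based feature location using set algebra
# over per-test coverage sets. Each test run's "slice" is the set of covered
# entities at whatever granularity the tooling provides.

def unique_to_feature(with_feature, without_feature):
    """Entities executed by every invoking test but by no excluding test."""
    exercised = set.intersection(*with_feature)
    excluded = set.union(*without_feature) if without_feature else set()
    return exercised - excluded

def common_to_features(feature_slices):
    """Entities executed by all features in a group."""
    return set.intersection(*feature_slices)

# Example with line-level coverage sets per test run:
t_feature = [{10, 11, 12, 20}, {10, 11, 12, 30}]   # tests exercising the feature
t_other   = [{10, 30}, {10, 20}]                   # tests that do not
print(unique_to_feature(t_feature, t_other))       # {11, 12}
```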
{"title":"Locating program features using execution slices","authors":"W. E. Wong, S. Gokhale, J. R. Horgan, Kishor S. Trivedi","doi":"10.1109/ASSET.1999.756769","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756769","url":null,"abstract":"An important step towards effective software maintenance is to locate the code relevant to a particular feature. We report a study applying an execution slice-based technique to a reliability and performance evaluator to identify the code which is unique to a feature, or is common to a group of features. Supported by tools called ATAC and /spl chi/Vue, the program features in the source code can be tracked down to files, functions, lines of code, decisions, and then c- or p-uses. Our study suggests that the technique can provide software programmers and maintainers with a good starting point for quick program understanding.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120960033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can real-time extensions survive a Windows NT crash?
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756757
N. Caudy, L. McFearin
Windows NT has become a widespread, general-purpose operating system and is seeing increased use in real-time applications. However, Windows NT was not designed for real-time operation and, in such environments, the all too common Windows NT system stop event (crash or "Blue Screen of Death") can prove catastrophic. Consequently, three commercial real-time extensions are available for Windows NT: Hyperkernel from Imagination Systems, INtime from RadiSys, and RTX from VenturCom. These extensions add determinism for real-time applications along with the capability for real-time applications to survive a Windows NT stop event. Each solution has a different architecture, and our tests revealed that each solution has a different response to Windows NT crashes. These extensions differ in the types of stop events that can be survived, the code required to survive a stop event, I/O capabilities after a stop event, and real-time performance during a stop event. However, all of these solutions allow some level of protection until the user can initiate an orderly shutdown at an appropriate time.
{"title":"Can real-time extensions survive a Windows NT crash?","authors":"N. Caudy, L. McFearin","doi":"10.1109/ASSET.1999.756757","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756757","url":null,"abstract":"Windows NT has become a widespread, general purpose operating system and is seeing increased use in real-time applications. However Windows NT was nor designed for real-time operation and, in such environments, the all too common Windows NT system stop event (crash or \"Blue Screen of Death\") can prove catastrophic. Consequently three commercial real-time extensions are available for Windows NT: Hyperkernel from Imagination Systems, INtime from RadiSys, and RTX from VenturCom. These extensions add determinism for real-time applications along with the capability for real-time applications to survive a Windows NT stop event. Each solution has a different architecture and our rests revealed that each solution has a different response to Windows NT crashes. These extensions differ in the types of stop events which can be survived the code required to survive a stop event, I/O capabilities after a stop event, and real-time performance during a stop event. However, all of these solutions allow some level of protection until the user can initiate an orderly shutdown at an appropriate time.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114321396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software component reliability analysis
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756770
W. Everett
This paper describes an approach to analyzing software reliability using component analysis. It walks through a 6-step procedure for performing software component reliability analysis. The analysis can begin prior to testing the software and can help in selecting testing strategies. It uses the Extended Execution Time (EET) reliability growth model at the software component level. The paper describes how to estimate model parameters from characteristics of the software components and of how test cases and operational usage stress those components. The order in which test cases are run is used in combining component models to arrive at a composite reliability growth model of the software for the testing period. The paper walks through an example illustrating the effects on reliability growth of selecting test cases based on an operational profile versus uniform coverage of test cases, and of incremental delivery of software components to system test. The paper contrasts the described approach with other approaches currently used to analyze software reliability growth during testing. The analysis can be done using commercial data analysis programs.
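As a rough illustration of combining per-component growth curves into a composite curve, the sketch below uses a basic exponential execution-time model as a stand-in (the paper's EET model is not reproduced here), with the test schedule's stress on each component expressed as usage fractions; all names and parameter values are hypothetical.

```python
# Illustrative sketch of composing per-component reliability growth curves.
# Stand-in model (NOT the paper's EET model):
#   mu_i(tau) = v_i * (1 - exp(-l_i * tau / v_i))
# where v_i is the expected total failures in component i and l_i its
# initial failure intensity, as in a basic exponential execution-time model.
import math

def expected_failures(v, l, tau):
    """Expected cumulative failures of one component after execution time tau."""
    return v * (1.0 - math.exp(-l * tau / v))

def composite_failures(components, usage, total_time):
    """Sum per-component curves, apportioning execution time by how much the
    test schedule stresses each component (usage fractions sum to 1)."""
    return sum(expected_failures(v, l, usage[name] * total_time)
               for name, (v, l) in components.items())

# Hypothetical components: (expected total failures, initial failure intensity)
components = {"parser": (12.0, 0.5), "scheduler": (8.0, 0.3)}
usage = {"parser": 0.7, "scheduler": 0.3}   # operational-profile-like weighting
print(composite_failures(components, usage, total_time=40.0))   # ~11.2 failures
```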
{"title":"Software component reliability analysis","authors":"W. Everett","doi":"10.1109/ASSET.1999.756770","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756770","url":null,"abstract":"This paper describes an approach to analyzing software reliability using component analysis. It walks through a 6-step procedure for performing software component reliability analysis. The analysis can begin prior to testing the software and can help in selecting testing strategies. It uses the Extended Execution Time (EET) reliability growth model at the software component level. The paper describes how to estimate model parameters from characteristics of the software components and characteristics of how test cases and operational usage stress the software components. The order in which test cases are run is used in combining component models to arrive at a composite reliability growth model of the software for the testing period. The paper walks through an example illustrating the effects on reliability growth of: selecting test cases based on an operational profile versus selecting them based on uniform coverage of test cases; and incremental delivery of software components to system test. The paper contrasts the described approach to other approaches currently used to analyze software reliability growth during testing. The analysis can be done using commercial data analysis programs.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134192459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
User-perceived availability and response-time in voting-based replicated systems: a case study
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756758
I. Chen, Ding-Chau Wang, Chih-Ping Chu
In this paper, we develop a modeling method based on stochastic Petri nets (SPN) to allow user-perceived measures in voting-based replicated systems to be estimated. The merit of our approach is that user-arrival, maintenance, and node/link-failure or -repair processes are fully decoupled, thus allowing us to remove some unnecessary modeling assumptions and also to keep track of states in which the system is unavailable to users from the user's perspective. We apply our method to contrast user-perceived availability and performance measures under dynamic and static voting algorithms in a 3-node, fully-connected network and discover that (a) for user-perceived availability, the conditions under which static voting is better than dynamic voting, or vice versa, are largely determined by the user workload; (b) for user-perceived response time, static voting is always better than dynamic voting. We give some physical interpretation of the analysis result. Our method is generic in nature and can be applied to analyzing other voting algorithms or network structures for replicated data management.
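For intuition about the availability side of the comparison, the following back-of-the-envelope sketch computes the availability of static majority voting over n independent nodes. The paper's SPN models additionally capture user arrivals, repair processes, and dynamic quorum adjustment, none of which this calculation attempts.

```python
# Availability of static majority voting over n nodes, each independently
# up with probability p: the system has a quorum when a majority is up.
from math import comb

def static_majority_availability(p, n=3):
    """P(at least a majority of n independent nodes is up)."""
    majority = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(majority, n + 1))

print(static_majority_availability(0.95))   # ~0.99275 for the 3-node case
```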
{"title":"User-perceived availability and response-time in voting-based replicated systems: a case study","authors":"I. Chen, Ding-Chau Wang, Chih-Ping Chu","doi":"10.1109/ASSET.1999.756758","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756758","url":null,"abstract":"In this paper, we develop a modeling method based on stochastic Petri nets (SPN) to allow user-perceived measures in voting-based replicated systems to be estimated. The merit of our approach is that user-arrival, maintenance, and node/link-failure or -repair processes are fully decoupled, thus allowing us to remove some unnecessary modeling assumptions and also to keep track of states in which the system is unavailable to users from the user's perspective. We apply our method to contrast user-perceived availability and performance measures under dynamic and static voting algorithms in a 3-node, fully-connected network and discover that (a) for user-perceived availability, the conditions under which static voting is better than dynamic voting, or vice versa, are largely determined by the user workload; (b) for user-perceived response time, static voting is always better than dynamic voting. We give some physical interpretation of the analysis result. Our method is generic in nature and can be applied to analyzing other voting algorithms or network structures for replicated data management.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130763039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NFR-Assistant: tool support for achieving quality
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756782
Quan T. Tran, L. Chung
This paper presents the NFR-Assistant, a prototype CASE tool that assists the software developer in systematically achieving quality requirements. The tool allows for explicit representation of non-functional requirements (NFRs), consideration of design alternatives, analysis of design trade-offs, rationalization of a design choice, and evaluation of the level of achievement of NFRs. The particular prototype presented in this paper, one of the first such tools, is a Java applet rendition of a subset of the NFR-Assistant. The paper illustrates the use of the prototype in developing an architectural design for none other than a distributed version of the NFR-Assistant itself.
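To make the idea concrete, the sketch below shows a toy version of the kind of structure such a tool manages: design alternatives linked to softgoals (NFRs) by contribution labels, with a crude numeric evaluation. The labels and scoring are a simplification for illustration, not the NFR-Assistant's actual data model.

```python
# Toy softgoal-contribution structure: each design alternative contributes
# positively or negatively to each non-functional requirement.
CONTRIB = {"+": 1, "-": -1}   # simplified contribution labels

def evaluate(alternative, contributions):
    """Score one design alternative by summing its contributions to NFRs."""
    return sum(CONTRIB[label] for nfr, label in contributions[alternative].items())

# Hypothetical trade-off: replication helps availability, hurts consistency/cost.
contributions = {
    "replicated_server": {"availability": "+", "consistency": "-", "cost": "-"},
    "single_server":     {"availability": "-", "consistency": "+", "cost": "+"},
}
for alt in contributions:
    print(alt, evaluate(alt, contributions))   # replicated_server -1, single_server 1
```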
{"title":"NFR-Assistant: tool support for achieving quality","authors":"Quan T. Tran, L. Chung","doi":"10.1109/ASSET.1999.756782","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756782","url":null,"abstract":"This paper presents the NFR-Assistant, a prototype CASE tool, which assists the software developer in systematically achieving quality requirements. The tool allows for explicit representation of non-functional requirements, consideration of design alternatives, analysis of design trade-offs, rationalization of a design choice and evaluation of the level of achievement of NFRs. As one of the first tools, the particular prototype presented in this paper is a Java applet rendition of a subset of the NFR-Assistant. The paper illustrates the use of the prototype for the development of an architectural design for no other than a distributed version of the NFR-Assistant itself.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"4 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131437417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparison of two buffer occupancy control algorithms in ATM networks
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756747
G. Trajkovski
Current research treats the ATM paradigm as a generalization of a multitude of concepts in the telecommunication area. On the other hand, fuzzy logic concepts are being applied in various engineering areas. The paper describes an application of a fuzzy logic inference engine to buffer occupancy control in ATM networks. A model of statistical multiplexers of video sources for ATM networks is presented. We investigate the impact that the multiplexer's buffer capacity and varying numbers of video sources have on the probability of cell loss. A conventional two-threshold control mechanism and a fuzzy control mechanism are considered, and their performance is compared. The features of these two controllers are observed for different update intervals and different propagation delays of the control signal.
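The conventional mechanism being compared against is a hysteresis controller: throttle the sources when occupancy crosses the high threshold, and release only once occupancy falls back below the low one. A minimal sketch with illustrative thresholds follows; the paper's fuzzy controller replaces this crisp rule with fuzzy membership functions over occupancy.

```python
# Minimal two-threshold (hysteresis) buffer occupancy controller. Occupancy
# is normalized to [0, 1]; thresholds and the control decision are illustrative.
class TwoThresholdController:
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.throttling = False

    def control(self, occupancy):
        """Return True if sources should be throttled at this update interval."""
        if occupancy >= self.high:
            self.throttling = True       # buffer filling up: throttle
        elif occupancy <= self.low:
            self.throttling = False      # buffer drained: release
        return self.throttling           # between thresholds: keep last decision

ctrl = TwoThresholdController(low=0.4, high=0.8)
for occ in (0.3, 0.85, 0.6, 0.35):
    print(occ, ctrl.control(occ))        # False, True, True, False
```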
{"title":"A comparison of two buffer occupancy control algorithms in ATM networks","authors":"G. Trajkovski","doi":"10.1109/ASSET.1999.756747","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756747","url":null,"abstract":"Current research treats the ATM paradigm as a generalization of multitude of concepts in the telecommunication area. On the other hand, fuzzy logic concepts are being applied in various engineering areas. The paper describes an application of fuzzy logic inference engine to buffer occupancy control in ATM networks. A model of statistical multiplexers of video sources for ATM networks is presented. We investigate the impact that the multiplexer's buffer capacity and the varying numbers of video sources have on the probability of cell loss. A conventional two-threshold and a fuzzy control mechanisms are considered and their performance is compared. The features of these two controllers are observed for different update intervals and different propagation delays of the control signal.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124350053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measurements for managing software reliability
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756771
K. Kanoun
Accurate software reliability evaluation requires the collection of comprehensive and consistent data sets on the ongoing software project. We define some measurements that can be performed on the software during its development or in operation to help evaluate and manage its reliability, and discuss organizational and feedback aspects. Since failure data are difficult to collect, the raw data set may include extraneous data: validation is thus needed before processing. Emphasis is put on near-term objectives, implying timely and efficient feedback for the ongoing project. However, as reliability measurement is to be considered the first step in a software reliability improvement program, we report some success stories in which improvement programs have increased productivity and reliability at no extra cost, or even with a cost reduction.
{"title":"Measurements for managing software reliability","authors":"K. Kanoun","doi":"10.1109/ASSET.1999.756771","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756771","url":null,"abstract":"Accurate software reliability evaluation requires the collection of comprehensive and consistent data sets on the ongoing software project. We define some measurements that can be performed on the software during its development or in operation to help evaluating and managing its reliability, and discuss organizational and feedback aspects. Since failure data are difficult to collect, the raw data set may include extraneous data: validation is thus needed before processing. Emphasis is put on near-term objectives, implying timely and efficient feedback for the ongoing project. However, as reliability measurement is to be considered in the first step in a software reliability improvement program, we report some success stories in which improvement programs have increased productivity and reliability at no extra cost or even with cost reduction.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116388655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measurements and quality of service issues in electronic commerce software
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756748
A. Bhargava, B. Bhargava
The performance of network and communication software is a major concern in making electronic commerce applications in a distributed environment a success. The quality of service in electronic commerce can generically be measured by convenience, privacy/security, response time, throughput, reliability, timeliness, accuracy, and precision. We present the quality-of-service parameters, the software architecture used in e-commerce, experimental data on transaction processing over the Internet, characteristics of the digital library databases used in e-commerce, and communication measurements for such data. We present a summary of e-commerce companies and their status, and give an example of electronic trading as an application.
{"title":"Measurements and quality of service issues in electronic commerce software","authors":"A. Bhargava, B. Bhargava","doi":"10.1109/ASSET.1999.756748","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756748","url":null,"abstract":"The performance of network and communication software is a major concern for making the electronic commerce applications in a distributed environment a success. The quality of service in electronic commerce can generically be measured by convenience, privacy/security, response time, throughput, reliability, timeliness, accuracy, and precision. We present the quality of service parameters, software architecture used in e-commerce, experimental data about transaction processing in the Internet, characteristics of digital library databases used in e-commerce and communication measurements for such data. We present a summary of e-commerce companies and their status and give an example of electronic trading as an application.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128828634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time supervisor modeling for telecom systems
Pub Date: 1999-03-24 | DOI: 10.1109/ASSET.1999.756765
Aguinaldo M. Filho, J. Saito, I. Gimenes
This paper presents software supervision as a technique for indirect software reliability improvement of telecom systems. Software supervision consists of monitoring both the inputs and outputs of a target system and checking them against the target system's specification. All discrepancies between observed sequences of signals and the target system's specification are reported as failures. The paper aims at showing how to use Statecharts as a formal technique to specify supervision models for telecom system software specified in SDL (Specification and Description Language). Moreover, a Statecharts-based Supervisor Modeling technique, called SSM, has been developed that allows the derivation of supervision models.
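The supervision loop itself reduces to checking observed signals against a state machine derived from the specification. The sketch below uses a flat FSM as a stand-in for an SSM-derived Statechart model; the call-handling states and signals are hypothetical.

```python
# Minimal software-supervision sketch: observed signals of the target system
# are checked against a specification modeled as a finite state machine.
# Any observed signal with no transition from the current state is reported
# as a failure (a discrepancy between observation and specification).
class Supervisor:
    def __init__(self, transitions, initial):
        self.transitions = transitions     # (state, signal) -> next state
        self.state = initial

    def observe(self, signal):
        """Advance on an expected signal; report a discrepancy otherwise."""
        key = (self.state, signal)
        if key in self.transitions:
            self.state = self.transitions[key]
            return None
        return f"failure: unexpected '{signal}' in state '{self.state}'"

# Hypothetical call-handling fragment: off-hook -> dial tone -> dialing
spec = {("idle", "off_hook"): "dial_tone", ("dial_tone", "digit"): "dialing"}
sup = Supervisor(spec, initial="idle")
for sig in ("off_hook", "digit", "on_hook"):
    msg = sup.observe(sig)
    if msg:
        print(msg)   # failure: unexpected 'on_hook' in state 'dialing'
```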
{"title":"Real-time supervisor modeling for telecom systems","authors":"Aguinaldo M. Filho, J. Saito, I. Gimenes","doi":"10.1109/ASSET.1999.756765","DOIUrl":"https://doi.org/10.1109/ASSET.1999.756765","url":null,"abstract":"This paper presents software supervision as a technique for indirect software reliability improvement of telecom systems. Software supervision consists of monitoring both the inputs and outputs of a target system and checking them against the target system's specification. All discrepancies between observed sequences of signals and the target system's specification are reported as failures. The paper aims at showing how to use Statecharts as a formal technique to specify supervision models to telecom systems software specified in SDL (Specification and Description Language). Moreover, the Statecharts-based Supervisor Modeling, called SSM, has been developed, which allows the derivation of supervision models.","PeriodicalId":340666,"journal":{"name":"Proceedings 1999 IEEE Symposium on Application-Specific Systems and Software Engineering and Technology. ASSET'99 (Cat. No.PR00122)","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134579643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}