Data transformation and attribute subset selection: Do they help make differences in software failure prediction?
Hao Jia, Fengdi Shu, Ye Yang, Qi Li
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306382
Data transformation and attribute subset selection have been adopted to improve software defect/failure prediction methods, but little consensus has been reached on their effectiveness. This paper reports a comparative study of these two kinds of techniques, combined with four classifiers and datasets from two projects. The results indicate that data transformation has no obvious influence on prediction performance, while attribute subset selection methods produce markedly inconsistent results. We also discuss consistency across releases and the discrepancy between the open-source and in-house maintenance projects in the evaluation of these methods.
Experimental assessment of manual versus tool-based maintenance of GUI-directed test scripts
M. Grechanik, Qing Xie, Chen Fu
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306345
Since manual black-box testing of GUI-based applications (GAPs) is tedious and laborious, test engineers create test scripts to automate the testing process. These test scripts interact with GAPs by performing actions on their GUI objects. As GAPs evolve, testers must fix the corresponding test scripts so that they can be reused to test successive releases. Currently, there are two main modes of maintaining test scripts: tool-based and manual. In practice, there is no consensus on which approach testers should use; test managers decide ad hoc, based on personal experience and the perceived benefits of the tool-based approach versus the manual one. In this paper we describe a case study with forty-five professional programmers and test engineers that experimentally assesses the tool-based approach for maintaining GUI-directed test scripts against the manual approach. Based on the results of our case study, and considering the high cost of programmers' time, the lower cost of test engineers' time, and the fact that programmers often modify GAP objects while developing software, we recommend that organizations supply programmers with testing tools that enable them to fix test scripts faster, so that these scripts can unit-test the software. The other side of our recommendation is that experienced test engineers are likely to be as productive with the manual approach as with the tool-based approach; consequently, organizations need not provide each tester with an expensive tool license to fix test scripts.
Automated performance analysis of load tests
Z. Jiang, A. Hassan, Gilbert Hamann, P. Flora
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306331
The goal of a load test is to uncover functional and performance problems of a system under load. Performance problems refer to situations where a system suffers from unexpectedly high response time or low throughput. It is difficult to detect performance problems in a load test due to the absence of formally defined performance objectives and the large amount of data that must be examined. In this paper, we present an approach which automatically analyzes the execution logs of a load test for performance problems. We first derive the system's performance baseline from previous runs. Then we perform an in-depth performance comparison against the derived performance baseline. Case studies show that our approach produces few false alarms (with a precision of 77%) and scales well to large industrial systems.
Linux kernels as complex networks: A novel method to study evolution
Lei Wang, Zheng Wang, Chen-Yang Yang, Li Zhang, Qiang Ye
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306348
In recent years, many real-world graphs have turned out to be complex networks. This paper presents a novel method to study Linux kernel evolution: using complex networks to understand how Linux kernel modules evolve over time. After studying the node degree distribution and average path length of the call graphs corresponding to the kernel modules of 223 versions (V1.1.0 to V2.4.35), we found that the call graphs of the file system and drivers modules are scale-free, small-world complex networks. In addition, both the file system and drivers modules exhibit a very strong preferential attachment tendency. Finally, we propose a generic method for finding major structural changes that occur during the evolution of software systems.
Visualizing the structure of field testing problems
Brian Chan, Ying Zou, A. Hassan, Anand Sinha
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306297
Field testing of a software application prior to general release is an important and essential quality assurance step. Field testing helps identify unforeseen problems. Extensive field testing leads to the reporting of a large number of problems which often overwhelm the allocated resources. Prior efforts focus primarily on studying the reported problems in isolation. We believe that a global view of the interdependencies between these problems will help in rapid understanding and resolution of reported problems. We present a visualization that highlights the commonalities between reported problems. The visualization helps developers identify two patterns that they can use to prioritize and focus their efforts. We demonstrate the applicability of our visualization through a case study on problems reported during field testing efforts for two releases of a large scale enterprise application.
Co-evolution of source code and the build system
Bram Adams
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306272
A build system breathes life into source code, as it configures and directs the construction of a software system from textual source code modules. Surprisingly, build languages and tools have received little attention from academics and practitioners, making current build systems a mysterious and frustrating resource to work with. Our dissertation presents a conceptual framework with tool support to recover, analyze and refactor a build system. We demonstrate the applicability of our framework by analyzing the evolution of the Linux kernel build system and the introduction of AOSD technology into five legacy build systems. In all cases, we found that the build system is a complex software system in its own right, trying to co-evolve with the source code in a synchronized way while working around shortcomings of the underlying build technology. Based on our findings, we hypothesize four conceptual reasons for co-evolution to guide future research in the area of build systems.
Digging deep: Software reengineering supported by database reverse engineering of a system with 30+ years of legacy
S. Strobl, Mario Bernhart, T. Grechenig, W. Kleinert
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306293
This paper describes industrial experience in performing database reverse engineering on a large-scale software reengineering project. The project in question deals with a highly heterogeneous in-house information system (IS) that has grown and evolved in numerous steps over the past three decades. This IS consists of a large number of loosely coupled single-purpose systems with a database-driven COBOL application at the centre, which has been adapted and enhanced to expose some functionality over the web. The software reengineering effort that provides the context for this paper deals with unifying these components and completely migrating the IS to an up-to-date and homogeneous platform. A database reverse engineering (DRE) process was tailored to suit the project environment, a database of almost 350 tables and 5,600 columns. The process aims at providing the developers of the software reengineering project with the necessary information about the more than thirty-year-old legacy databases to successfully perform the data migration. Applying the DRE process resulted in a high-level categorization of the data model, a wiki-based redocumentation structure, and the essential data-access statistics.
Managing code clones using dynamic change tracking and resolution
M. D. Wit, A. Zaidman, A. Deursen
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306336
Code cloning is widely recognized as a threat to the maintainability of source code, and many clone detection and removal strategies have been proposed. However, some clones cannot easily be removed, so complementary strategies based on clone management need to be developed. In this paper we describe a clone management strategy based on dynamically inferring clone relations by monitoring clipboard activity. We introduce CloneBoard, our Eclipse plug-in implementation, which is able to track live changes to clones and offers several resolution strategies for inconsistently modified clones. We performed a user study with seven subjects to assess the adequacy, usability and effectiveness of CloneBoard; the results show that developers do see the added value of such a tool but have strict requirements with respect to its usability.
On predicting the time taken to correct bug reports in open source projects
P. Anbalagan, M. Vouk
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306337
Existing studies on the maintenance of open source projects focus primarily on analyses of the projects' overall maintenance and less on specific categories such as corrective maintenance. This paper presents results from an empirical study of bug reports from an open source project, identifies user participation in the corrective maintenance process through bug reports, and constructs a model to predict the corrective maintenance effort for the project in terms of the time taken to correct faults. Our study covers 72,482 bug reports from over nine releases of Ubuntu, a popular Linux distribution. We present three main results: 1) 95% of the bug reports are corrected by people participating in groups of 1 to 8 people; 2) there is a strong linear relationship (about 92%) between the number of people participating in a bug report and the time taken to correct it; 3) a linear model can be used to predict the time taken to correct bug reports.
Augmenting static source views in IDEs with dynamic metrics
David Röthlisberger, M. Harry, A. Villazón, Danilo Ansaloni, Walter Binder, Oscar Nierstrasz, Philippe Moret
Pub Date: 2009-10-30 | DOI: 10.1109/ICSM.2009.5306302
Mainstream IDEs such as Eclipse support developers in managing software projects mainly by offering static views of the source code. Such a static perspective neglects any information about runtime behavior. However, object-oriented programs rely heavily on polymorphism and late binding, which makes them difficult to understand based on their static structure alone. Developers thus resort to debuggers or profilers to study the system's dynamics. However, the information provided by these tools is volatile and hence cannot be exploited to ease navigation of the source space. In this paper we present an approach to augment the static source perspective with dynamic metrics such as precise runtime type information or memory and object allocation statistics. Dynamic metrics can improve the understanding of the behavior and structure of a system. We rely on aspect-based dynamic data gathering to analyze running Java systems. By solving concrete use cases we illustrate how dynamic metrics directly available in the IDE are useful. We also report comprehensively on the efficiency of our approach to gathering dynamic metrics.