Pub Date: 2018-09-01 | DOI: 10.1109/icsme.2018.00005
On behalf of the entire conference committee, it is our great pleasure to welcome you to Madrid for ICSME 2018, the 34th IEEE International Conference on Software Maintenance and Evolution. ICSME is the premier international forum for researchers and practitioners from academia, industry, and government to present, discuss, and debate the most recent ideas, experiences, and challenges in software maintenance and evolution.
{"title":"Message from the General Chair and the Program Co-Chairs","authors":"","doi":"10.1109/icsme.2018.00005","DOIUrl":"https://doi.org/10.1109/icsme.2018.00005","url":null,"abstract":"On behalf of the entire conference committee, it is our great pleasure to welcome you to Madrid for ICSME 2018, the 34th IEEE International Conference on Software Maintenance and Evolution. ICSME is the premier international forum for researchers and practitioners from academia, industry, and government to present, discuss, and debate the most recent ideas, experiences, and challenges in software maintenance and evolution.","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75290565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICSME.2018.00015
Ziming Zhu, L. Jiao, Xiong Xu
In the area of software testing, search-based software testing (SBST) and dynamic symbolic execution (DSE) are two efficient techniques for test case generation. However, both approaches have their own drawbacks. The efficiency of SBST depends on the guidance provided by the fitness landscape; when the landscape contains plateaus with no gradient to direct the search process, SBST may degenerate into random testing. DSE relies on the capability of constraint solvers and may struggle to generate test cases for constraints that are difficult to solve. In this paper, we combine the strengths of both techniques: SBST helps DSE solve difficult constraints, and DSE improves the efficiency and capability of SBST. An evolvability metric is introduced to measure when the software under test is not suitable for SBST, and a novel switch mechanism between SBST and DSE, based on this metric, is proposed to choose the proper technique at the proper time. Experiments on several benchmarks show promising results for our proposal.
{"title":"Combining Search-Based Testing and Dynamic Symbolic Execution by Evolvability Metric","authors":"Ziming Zhu, L. Jiao, Xiong Xu","doi":"10.1109/ICSME.2018.00015","DOIUrl":"https://doi.org/10.1109/ICSME.2018.00015","url":null,"abstract":"In the area of software testing, search-based software testing (SBST) and dynamic symbolic execution (DSE) are two efficient testing techniques for test cases generation. However, both of the two approaches have their own drawbacks: The efficiency of SBST depends on the guidance of the fitness landscape. When the fitness landscape has some plateaus with no gradient for directing search process, SBST may degenerate into random testing. DSE relies on the capability of constraint solvers. It may struggle to generate test cases with constraints that are difficult to be solved. In this paper, we combine the strengths of both techniques. SBST is used to help DSE for solving difficult constraints and DSE is used to improve the efficiency and capability of SBST. Evolvability metric is introduced for measuring when the software is not suitable for SBST. A novel switch mechanism based on the evolvability metric between SBST and DSE is proposed in this paper to help to choose the proper technique at the proper time. Experiments on several benchmarks reveal the promising results of our proposal.","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"40 1","pages":"59-68"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78529398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICSME.2018.00087
Matthieu Jimenez, Maxime Cordy, Yves Le Traon, Mike Papadakis
Natural language processing techniques, in particular n-gram models, have been applied successfully to facilitate a number of software engineering tasks. However, in our related ICSME '18 paper, we have shown that the conclusions of a study can change drastically depending on how the code is tokenized and how the n-gram model is parameterized. These choices are thus of utmost importance and must be made carefully. To show this and to allow the community to benefit from our work, we have developed TUNA (TUning Naturalness-based Analysis), a Java software artifact for performing naturalness-based analyses of source code. To the best of our knowledge, TUNA is the first open-source, end-to-end toolchain for carrying out source code analyses based on naturalness.
{"title":"TUNA: TUning Naturalness-Based Analysis","authors":"Matthieu Jimenez, Maxime Cordy, Yves Le Traon, Mike Papadakis","doi":"10.1109/ICSME.2018.00087","DOIUrl":"https://doi.org/10.1109/ICSME.2018.00087","url":null,"abstract":"Natural language processing techniques, in particular n-gram models, have been applied successfully to facilitate a number of software engineering tasks. However, in our related ICSME '18 paper, we have shown that the conclusions of a study can drastically change with respect to how the code is tokenized and how the used n-gram model is parameterized. These choices are thus of utmost importance, and one must carefully make them. To show this and allow the community to benefit from our work, we have developed TUNA (TUning Naturalness-based Analysis), a Java software artifact to perform naturalness-based analyses of source code. To the best of our knowledge, TUNA is the first open-source, end-to-end toolchain to carry out source code analyses based on naturalness.","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"4 1","pages":"715-715"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75958014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICSME.2018.00034
Hobum Kwon, Juwon Ahn, Sun-Tae Choi, Jakub Siewierski, Piotr Kosko, Piotr Szydelko
Development and maintenance of software platform APIs are challenging because new APIs are constantly added as software platforms evolve. Furthermore, platform API development requires many stakeholders to work together on tight release schedules. Application developers use the platform's APIs to create their applications, so providing a well-defined and comprehensive set of platform APIs may be the most basic requirement for a software platform. To provide such APIs, API usability must be ensured and API backward compatibility must be guaranteed in subsequent platform releases. In these circumstances, sharing lessons learned from multiple years of platform API development, maintenance, and releases using an integrated API development process can benefit API researchers and practitioners who need to create or adopt an API development process for their own projects. In this paper, we share an API development and maintenance process for the multi-device Tizen software platform, which we call the Tizen API Change Request (ACR) process. The process has been used by various Tizen API stakeholders across several years of Tizen platform and SDK releases to keep API usability and compatibility high. We believe the process can also be applied to other software platforms and projects to systematically develop and maintain their APIs.
{"title":"An Experience Report of the API Evolution and Maintenance for Software Platforms","authors":"Hobum Kwon, Juwon Ahn, Sun-Tae Choi, Jakub Siewierski, Piotr Kosko, Piotr Szydelko","doi":"10.1109/ICSME.2018.00034","DOIUrl":"https://doi.org/10.1109/ICSME.2018.00034","url":null,"abstract":"Development and maintenance of software plat-form APIs are challenging because new APIs are constantly added in new software platforms. Furthermore, software plat-form API development requires a lot of stakeholders to work together on tight release schedules. Application developers use platform's APIs to create their applications and therefore providing a well-defined and comprehensive set of platform APIs may be the most basic requirement for software platforms. To provide such APIs, API usability should be secured and API backward compatibility should be guaranteed in subsequent platform re-leases. In these circumstances, sharing lessons learned from multiple years of experience of platform API development, mainte-nance, and releases using an integrated API development process can benefit API researchers and practitioners who have similar needs to create or adopt API development process for their projects. In this paper we share an API development and mainte-nance process for multi-device Tizen software platform, which we call the Tizen API Change Request (ACR) process. The process has been used among various Tizen API stakeholders for several years of Tizen platform and SDK releases to keep API usability and compatibility high. We believe the process can be further applied to various software platforms and projects to systematically develop and maintain their APIs.","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"3 1","pages":"587-590"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86388115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICSME.2018.00093
Matúš Sulír
The inherently abstract nature of source code makes programs difficult to understand. In our research, we designed three techniques that utilize the concrete values of variables and other expressions during program execution. RuntimeSearch is a debugger extension that searches for a given string in all expressions at runtime. DynamiDoc generates documentation sentences containing examples of arguments, return values, and state changes. RuntimeSamp augments source code lines in the IDE (integrated development environment) with sample variable values. In this post-doctoral article, we briefly describe these three approaches and the related motivational studies, surveys, and evaluations. We also reflect on the PhD study itself, providing advice for current students. Finally, short-term and long-term future work is described.
{"title":"Integrating Runtime Values with Source Code to Facilitate Program Comprehension","authors":"Matúš Sulír","doi":"10.1109/ICSME.2018.00093","DOIUrl":"https://doi.org/10.1109/ICSME.2018.00093","url":null,"abstract":"An inherently abstract nature of source code makes programs difficult to understand. In our research, we designed three techniques utilizing concrete values of variables and other expressions during program execution. RuntimeSearch is a debugger extension searching for a given string in all expressions at runtime. DynamiDoc generates documentation sentences containing examples of arguments, return values and state changes. RuntimeSamp augments source code lines in the IDE (integrated development environment) with sample variable values. In this post-doctoral article, we briefly describe these three approaches and related motivational studies, surveys and evaluations. We also reflect on the PhD study, providing advice for current students. Finally, short-term and long-term future work is described.","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"1 1","pages":"743-748"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90358367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICSME.2018.00092
Simone Romano
Dead code is a bad smell. It is conjectured to be harmful, and it also appears to be a common phenomenon in software systems. Surprisingly, dead code has received little empirical attention from the software engineering research community. This post-doctoral track paper presents the main results of a multi-study investigation into dead code, with the overarching goal of studying when and why developers introduce dead code, how they perceive and cope with it, and whether it is harmful. The investigation comprises semi-structured interviews with software professionals and four experiments conducted at the University of Basilicata and the College of William & Mary. The results suggest that dead code is worth studying not only in the maintenance and evolution phases, where its presence appears detrimental to developers, but also in the design and implementation phases, where source code is born dead because developers treat dead code as a means of reuse. The results also motivate the development of tools for detecting dead code; in this respect, two approaches were proposed and then implemented in two prototype supporting tools.
{"title":"Dead Code","authors":"Simone Romano","doi":"10.1109/ICSME.2018.00092","DOIUrl":"https://doi.org/10.1109/ICSME.2018.00092","url":null,"abstract":"Dead code is a bad smell. It is conjectured to be harmful and it appears to be also a common phenomenon in software systems. Surprisingly, dead code has received little empirical attention from the software engineering research community. This post-doctoral track paper shows the main results of a multi-study investigation into dead code with an overarching goal to study when and why developers introduce dead code, how they perceive and cope with it, and whether dead code is harmful. This investigation is composed of semi-structured interviews with software professionals and four experiments at the University of Basilicata and the College of William & Mary. The results suggest that it is worth studying dead code not only in maintenance and evolution phases, where the results suggest that its presence is detrimental to developers, but also in design and implementation phases, where source code is born dead because developers consider dead code as a sort of reuse means. The results also foster the development of tools for detecting dead code. In this respect, two approaches were proposed and then implemented in two prototypes of supporting tool.","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"40 1","pages":"737-742"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90515330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICSME.2018.00017
Jevgenija Pantiuchina, Michele Lanza, G. Bavota
Code quality metrics are widely used to identify design flaws (e.g., code smells) and to act as fitness functions for refactoring recommenders. Both of these applications imply a strong assumption: that quality metrics are able to assess code quality as perceived by developers. Indeed, code smell detectors and refactoring recommenders should identify design flaws and recommend refactorings that are meaningful from the developer's point of view. While such an assumption might look reasonable, there is limited empirical evidence supporting it. We aim to bridge this gap by empirically investigating whether quality metrics are able to capture code quality improvements as perceived by developers. While previous studies surveyed developers to investigate whether metrics align with their perception of code quality, we mine commits in which developers clearly state in the commit message their aim of improving one of four quality attributes: cohesion, coupling, code readability, and code complexity. We then use state-of-the-art metrics to assess the change brought by each of these commits to the specific quality attribute it targets. We found that, more often than not, the considered quality metrics were not able to capture the quality improvement perceived by developers (e.g., the developer states "improved the cohesion of class C", but no quality metric captures such an improvement).
{"title":"Improving Code: The (Mis) Perception of Quality Metrics","authors":"Jevgenija Pantiuchina, Michele Lanza, G. Bavota","doi":"10.1109/ICSME.2018.00017","DOIUrl":"https://doi.org/10.1109/ICSME.2018.00017","url":null,"abstract":"Code quality metrics are widely used to identify design flaws (e.g., code smells) as well as to act as fitness functions for refactoring recommenders. Both these applications imply a strong assumption: quality metrics are able to assess code quality as perceived by developers. Indeed, code smell detectors and refactoring recommenders should be able to identify design flaws/recommend refactorings that are meaningful from the developer's point-of-view. While such an assumption might look reasonable, there is limited empirical evidence supporting it. We aim at bridging this gap by empirically investigating whether quality metrics are able to capture code quality improvement as perceived by developers. While previous studies surveyed developers to investigate whether metrics align with their perception of code quality, we mine commits in which developers clearly state in the commit message their aim of improving one of four quality attributes: cohesion, coupling, code readability, and code complexity. Then, we use state-of-the-art metrics to assess the change brought by each of those commits to the specific quality attribute it targets. We found that, more often than not the considered quality metrics were not able to capture the quality improvement as perceived by developers (e.g., the developer states \"improved the cohesion of class C\", but no quality metric captures such an improvement).","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"1 1","pages":"80-91"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89762525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICSME.2018.00079
W. Fracz, J. Dajda
Despite the dynamic development of the software industry, there is still no proven and accurate method for assessing developers' performance. Current solutions are based on a limited set of factors that managers can easily measure, such as the number of hours worked, issues closed, or lines of code written. However, there is more to it than that: developers write code of better or worse quality, and they perform code reviews and refactorings. To address these needs, we have created the Code Review Analyzer (CRA) tool, which uses information gathered from code review platforms to assess developers based on their work style. Among other things, it collects information on commit frequency, the number of code review rejections, and the number of code reviews performed. This information is used to calculate developer performance in a continuous manner and to introduce gamification techniques into the team space by providing developers with a ranking and awarding them various achievement badges. The tool was then evaluated experimentally to assess the accuracy of the evaluation and to verify the motivational impact of the gamification techniques. A demonstration of the CRA tool can be seen at https://youtu.be/dUFFxCeH-ok.
{"title":"Developers' Game: A Preliminary Study Concerning a Tool for Automated Developers Assessment","authors":"W. Fracz, J. Dajda","doi":"10.1109/ICSME.2018.00079","DOIUrl":"https://doi.org/10.1109/ICSME.2018.00079","url":null,"abstract":"These days, despite dynamic development of the software industry there is still no proven and accurate assessment method of developers' performance. Current solutions are based on limited set of factors that can be easily measured by managers, such as the number of hours worked, issues closed, or lines of code written. However, there is more than that: developers write code of better or worse quality, they perform code reviews and code refactorings. To answer these needs we have created the Code Review Analyzer (CRA) tool that uses information gathered in code review platforms to assess developers based on their work style. Among others, it collects information on commits frequency, number of code review rejections and code reviews performed. This information is used to calculate developers performance in a continuous manner and to introduce gamification techniques into the team space by providing developers with their ranking and awarding them with various achievement badges. Afterwards, the tool was experimentally evaluated in order to prove the evaluation accuracy but also to verify the motivational impact of the gamification techniques. The CRA tool demonstration can be seen at https://youtu.be/dUFFxCeH-ok.","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"16 1","pages":"695-699"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81414770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICSME.2018.00013
Weilun Xiong, Shihao Chen, Yuning Zhang, Mingyuan Xia, Zhengwei Qi
Mobile apps are born to work in an environment with ever-changing network connectivity, random hardware interruptions, unanticipated task switches, and so on. However, such interference is often overlooked in traditional mobile testing, yet it occurs frequently and in sophisticated ways in the field, causing various robustness, responsiveness, and consistency problems. In this paper, we propose JazzDroid to introduce interference into mobile testing. JazzDroid adopts a gray-box approach that instruments apps at the binary level so that interference logic is inlined with app execution and can be triggered to effectively affect normal execution. JazzDroid then repeatedly orchestrates the instrumented app through the app developers' existing tests and continuously randomizes interference on the fly to reveal possible faulty executions. Upon discovering a problem, JazzDroid generates a test script containing the user inputs from the developers' tests and the injected interference, allowing developers to reproduce the problem. At a high level, JazzDroid can be seamlessly integrated into app developers' testing procedures, detecting more problems from existing tests. We implemented JazzDroid to work on unmodified apps taken directly from app markets and to interface with the de facto industrial testing toolchain. JazzDroid improves mobile testing by discovering 6x more problems, including crashes, functional bugs, UI consistency issues, and common bug patterns that cause failures in numerous apps.
{"title":"Reproducible Interference-Aware Mobile Testing","authors":"Weilun Xiong, Shihao Chen, Yuning Zhang, Mingyuan Xia, Zhengwei Qi","doi":"10.1109/ICSME.2018.00013","DOIUrl":"https://doi.org/10.1109/ICSME.2018.00013","url":null,"abstract":"Mobile apps are born to work in an environment with ever-changing network connectivity, random hardware interruption, unanticipated task switches, etc. However, such interference cases are often oblivious in traditional mobile testing but happen frequently and sophisticatedly in the field, causing various robustness, responsiveness and consistency problems. In this paper, we propose JazzDroid to introduce interference to mobile testing. JazzDroid adopts a gray-box approach to instrument apps at binary level such that interference logic is inlined with app execution and can be triggered to effectively affect normal execution. Then, JazzDroid repeatedly orchestrates the instrumented app through app developers' existing tests and continuously randomizes interference on the fly to reveal possible faulty executions. Upon discovering problems, JazzDroid generates a test script with the user inputs from developers' tests and the interference injected for developers to reproduce the problems. At a high level, JazzDroid can be seamlessly integrated into app developers' testing procedures, detecting more problems from existing tests. We implement JazzDroid to function on unmodified apps directly from app markets and interface with de facto industrial testing toolchain. JazzDroid improves mobile testing by discovering 6x more problems, including crashes, functional bugs, UI consistency issues and common bug patterns that fail numerous apps.","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"31 1","pages":"36-47"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82417488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/icsme.2018.00001
{"title":"Title Page i","authors":"","doi":"10.1109/icsme.2018.00001","DOIUrl":"https://doi.org/10.1109/icsme.2018.00001","url":null,"abstract":"","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77125343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}