On the Deterioration of Learning-Based Malware Detectors for Android
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00110
Xiaoqin Fu, Haipeng Cai
Classification using machine learning has been a major class of defense solutions against malware. Yet despite a large and growing number of learning-based malware detection techniques for Android, malicious apps keep breaking out, with increasing momentum, in various Android app markets. In this context, we ask the question "what is it that makes new and emerging malware slip through such a large collection of detection techniques?". Intuitively, performance deterioration of malware detectors could be a main cause: trained on older samples, they are increasingly unable to capture new malware. To answer this question, this work investigates the deterioration problem in four state-of-the-art Android malware detectors. We confirmed our hypothesis that these existing solutions deteriorate substantially and rapidly over time. We also propose a new classification approach built on the results of a longitudinal characterization study of Android apps, with a focus on their dynamic behaviors. We evaluated this new approach against the four existing detectors and demonstrated significant advantages of our new solution. The main lesson learned is that studying app evolution provides a promising avenue for long-span malware detection.
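To make the deterioration setup concrete, here is a minimal Python sketch (not the authors' pipeline) of a longitudinal evaluation: a detector is trained on samples from one year and its F1 score is measured on apps from later years. The `samples_by_year` structure and the choice of classifier are illustrative assumptions.

```python
# Minimal sketch of a longitudinal (temporal) evaluation of a malware detector.
# Feature extraction and the samples_by_year dict are hypothetical placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def deterioration_curve(samples_by_year, train_year):
    """samples_by_year: {year: (X, y)} with X feature matrices and y labels."""
    X_train, y_train = samples_by_year[train_year]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    curve = {}
    for year, (X_test, y_test) in sorted(samples_by_year.items()):
        if year <= train_year:
            continue  # only evaluate on apps newer than the training set
        curve[year] = f1_score(y_test, clf.predict(X_test))
    return curve  # expected to drop as the test year moves away from train_year
```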
AC3R: Automatically Reconstructing Car Crashes from Police Reports
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00031
Tri Huynh, Alessio Gambi, G. Fraser
Autonomous driving promises to drastically reduce car accidents, but recently reported fatal crashes involving self-driving cars suggest that self-driving car software should be tested more thoroughly. To address this need, we introduce AC3R (Automatic Crash Constructor from Crash Report), which processes police reports to automatically recreate car crashes in a simulated environment that can be used to test self-driving car software in critical situations. AC3R enables developers to quickly generate relevant test cases from the massive historical dataset of recorded car crashes. We demonstrate how AC3R can generate simulations of different car crashes and report the findings of a large user study which concluded that AC3R simulations are accurate. A video illustrating AC3R in action is available at: https://youtu.be/V708fDG_ux8
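As a rough illustration of the pipeline the abstract describes, the hypothetical Python sketch below models a crash scenario extracted from a police report and turns it into per-vehicle trajectories that a simulator could replay. The field names and the trajectory format are assumptions, not AC3R's actual data model.

```python
# Hypothetical sketch: structured crash scenario -> replayable trajectories.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VehicleAction:
    vehicle_id: str
    waypoints: List[Tuple[float, float]]  # (x, y) positions leading to impact
    speed_kmh: float

@dataclass
class CrashScenario:
    road_geometry: str               # e.g. "4-way intersection"
    actions: List[VehicleAction]
    impact_point: Tuple[float, float]

def to_simulation(scenario: CrashScenario):
    """Yield per-vehicle trajectories that a driving simulator could replay."""
    for action in scenario.actions:
        yield action.vehicle_id, [(x, y, action.speed_kmh) for x, y in action.waypoints]
```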
Optimal MC/DC Test Case Generation
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00118
J. Jaffar, Sangharatna Godboley, Rasool Maghareh
We present a new method for automated test case generation based on symbolic execution and a custom process of interpolation. The method first identifies program execution paths in order to define a corresponding set of test inputs. It then annotates the program with assertions so as to identify feasible and infeasible cases, the former of which are processed to produce the desired test inputs. The main contribution is that performing symbolic execution using a custom form of interpolation significantly prunes the search space. Our main result is that the set of Modified Condition/Decision Coverage (MC/DC) test cases we produce is optimal.
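For readers unfamiliar with MC/DC, the short Python sketch below (a worked illustration, not the paper's interpolation-based algorithm) shows what the criterion demands for the decision `(a and b) or c`: for each condition, a pair of tests that differ only in that condition and flip the decision.

```python
# Brute-force illustration of MC/DC independence pairs for (a and b) or c.
from itertools import product

def decision(a, b, c):
    return (a and b) or c

def independence_pairs(cond_index):
    """Pairs of inputs differing only in one condition yet flipping the decision."""
    pairs = []
    for t1, t2 in product(product([False, True], repeat=3), repeat=2):
        differs_only_there = all((t1[i] == t2[i]) == (i != cond_index) for i in range(3))
        if differs_only_there and decision(*t1) != decision(*t2):
            pairs.append((t1, t2))
    return pairs

for i, name in enumerate("abc"):
    print(name, independence_pairs(i))
# A minimal MC/DC suite picks one pair per condition, i.e. 4 tests for this decision.
```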
Finding Concurrency Exploits on Smart Contracts
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00061
Yue Li
Smart contracts have been widely used on Ethereum to enable business services across various application domains. However, they are prone to different forms of security attacks due to the dynamic and non-deterministic blockchain runtime environment. In this work, we highlight a general miner-side class of exploit, called a concurrency exploit, which attacks smart contracts by generating malicious transaction sequences. Moreover, we design a systematic algorithm to automatically detect such exploits. In a preliminary evaluation, our approach identified real vulnerabilities that cannot be detected by other tools in the literature.
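The toy Python model below (illustrative only, not the paper's detection algorithm) shows why miner-controlled transaction ordering matters: the same set of transactions yields different contract states depending on the order in which a miner includes them.

```python
# Toy model of an order-dependent contract: whoever claims first wins the bounty.
class BountyContract:
    def __init__(self, bounty):
        self.bounty = bounty
        self.winner = None

    def claim(self, sender):
        if self.winner is None:       # first claim wins; later claims are no-ops
            self.winner = sender

def run(order):
    contract = BountyContract(bounty=10)
    for tx in order:                  # a miner decides this ordering
        contract.claim(tx)
    return contract.winner

print(run(["alice", "miner"]))   # alice
print(run(["miner", "alice"]))   # the miner front-runs and takes the bounty
```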
Effects of Automated Static Analysis Tools: A Multidimensional View on Quality Evolution
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00075
Alexander Trautsch
Leveraging and acting on the results of static analysis is a well-known best practice. Static analysis is also an integral part of software quality assurance, which is highlighted by the inclusion of static analysis results in software quality models like ColumbusQM and Quamoco. Although there are studies that explore whether static analysis tools are used and how they are configured, few publications explore the longitudinal effects of acting on static analysis results on software and its evolution. In particular, effects on quality criteria, e.g., software quality metrics, defects, or readability, are missing. With our research, we will bridge this gap and measure the effects of static analysis on software quality evolution. We will measure the effect that removing code which generates static analysis warnings has on software quality metrics. Furthermore, we will measure long-term effects on external quality attributes, e.g., reported issues and defects. Finally, we want to predict false positives of static analysis warnings by training predictive models on our collected data.
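As a sketch of the planned false-positive prediction step, the following Python snippet trains a classifier on labelled static-analysis warnings; the feature set, labels, and model choice are assumptions for illustration, not the study's actual design.

```python
# Sketch: predict whether a static analysis warning is a false positive,
# given per-warning features (warning type, file churn, warning age, ...).
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def false_positive_model(X, y):
    """X: warning feature matrix, y: 1 if the warning turned out to be a false positive."""
    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")  # rough quality estimate
    model.fit(X, y)
    return model, scores.mean()
```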
Characterizing and Detecting Duplicate Logging Code Smells
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00062
Zhenhao Li
Software logs are widely used by developers to assist in various tasks. Despite the importance of logs, prior studies show that there is no industrial standard on how to write logging statements. Recent research on logs often considers the appropriateness of a log only as an individual item (e.g., a single logging statement), while logs are typically analyzed in tandem. In this paper, we focus on studying duplicate logging statements, which are logging statements that have the same static text message. Such duplication in the text message is a potential indication of logging code smells, which may affect developers' understanding of the dynamic view of the system. We manually studied over 3K duplicate logging statements and their surrounding code in four large-scale open source systems and uncovered five patterns of duplicate logging code smells. For each instance of a problematic code smell, we contacted developers to verify the results of our manual study. We integrated our manual study results and the developers' feedback into our automated static analysis tool, DLFinder, which automatically detects problematic duplicate logging code smells. We evaluated DLFinder on the manually studied systems and two additional systems. In total, combining the results of DLFinder and our manual analysis, DLFinder is able to detect over 85% of the instances that were reported to developers and then fixed.
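The core notion of a duplicate logging statement can be illustrated with a simplified Python scan (not DLFinder itself) that groups logging calls sharing the same static text message; the regex only matches simple string-literal log calls, so it understates what DLFinder analyzes.

```python
# Simplified scan for logging statements with identical static text messages.
import re
from collections import defaultdict
from pathlib import Path

LOG_CALL = re.compile(r'\b(?:log|logger|LOG)\.(?:trace|debug|info|warn|error)\(\s*"([^"]*)"')

def duplicate_logging_statements(src_root):
    by_message = defaultdict(list)
    for path in Path(src_root).rglob("*.java"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for message in LOG_CALL.findall(line):
                by_message[message].append((str(path), lineno))
    # keep only messages logged at more than one site
    return {msg: sites for msg, sites in by_message.items() if len(sites) > 1}
```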
Deobfuscating Android Native Binary Code
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00135
Zeliang Kan, Haoyu Wang, Lei Wu, Yao Guo, Guoai Xu
In this paper, we propose an automated approach to facilitate the deobfuscation of Android native binary code. Specifically, given a native binary obfuscated by Obfuscator-LLVM (the most popular native code obfuscator), our deobfuscation system is capable of recovering the original control flow graph. To the best of our knowledge, this is the first work that aims to tackle this problem. We have applied our system in different scenarios, and the experimental results demonstrate the effectiveness of our system based on generic similarity comparison metrics.
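As a hedged illustration of the "generic similarity comparison metrics" mentioned in the evaluation, the Python sketch below compares a recovered control flow graph against the original one using a simple Jaccard index over basic-block edges; this is a stand-in metric, not necessarily the authors' exact measurement.

```python
# Jaccard similarity over CFG edges as a generic graph-similarity stand-in.
def cfg_edge_similarity(original_edges, recovered_edges):
    """Both arguments are sets of (source_block, target_block) pairs."""
    original, recovered = set(original_edges), set(recovered_edges)
    if not original and not recovered:
        return 1.0
    return len(original & recovered) / len(original | recovered)

# Example: a perfectly recovered 3-block CFG scores 1.0
print(cfg_edge_similarity({("A", "B"), ("B", "C")}, {("A", "B"), ("B", "C")}))
```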
A Winning Team - What Personality Has To Do With Software Engineering
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00100
Erica Weilemann
Nowadays, software is developed in teams. But how should teams be put together in order to build a high-quality team? This study shows how the different roles in a software development team (project leader, requirements engineer, architect/designer, and developer/tester/maintainer) should be staffed with respect to HEXACO personality traits in order to form a high-quality team. We conducted a qualitative analysis based on 12 semi-structured interviews with interviewees who work in the software engineering sector and have at least 2 years of working experience. We followed a Grounded Theory approach to derive personality traits and link them to software engineering roles. Our study shows that different personality profiles are indeed beneficial for the different roles. A project leader, for example, should have a more pronounced Agreeableness domain, whereas the Extraversion domain should be more pronounced for a requirements engineer. Our results should support the composition of software engineering teams with the aim of building high-quality teams.
Property Oriented Verification Via Iterative Abstract Interpretation
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00067
Banghu Yin
Static analysis by abstract interpretation is one of the most widely used automatic approaches to program verification, due to its soundness guarantee and scalability. A key challenge for abstract interpretation with convex and linear abstract domains is verifying complex programs with disjunctive or non-linear behaviors. Our approach conducts abstract interpretation in an iterative forward and backward manner. It utilizes dynamic input space partitioning to split the input space into sub-spaces, so that each sub-space involves fewer disjunctive and non-linear program behaviors and is easier to verify. We have implemented our approach, and the experimental results are promising.
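A tiny Python example (not the paper's analysis) shows why input space partitioning helps with non-linear behavior: interval abstract interpretation of `y = x*x` over the whole input range cannot prove `y >= 0`, while splitting the input into sub-spaces and joining the per-partition results can.

```python
# Interval multiplication loses the relation between the two (identical) operands,
# so analyzing the whole input range is too coarse; partitioning recovers precision.
def interval_mul(a, b):
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

whole = (-10, 10)
print(interval_mul(whole, whole))          # (-100, 100): cannot prove y >= 0

for sub in [(-10, 0), (0, 10)]:            # partition the input space
    print(sub, interval_mul(sub, sub))     # (0, 100) on both sub-spaces
# Joining the per-partition results proves y >= 0 on the whole input space.
```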
Mockingbird: A Framework for Enabling Targeted Dynamic Analysis of Java Programs
Pub Date: 2019-05-25 | DOI: 10.1109/ICSE-Companion.2019.00033
Derrick Lockwood, Benjamin Holland, S. Kothari
This paper presents the Mockingbird framework, which combines static and dynamic analyses to yield an efficient and scalable approach for analyzing large Java software. The framework is an innovative integration of existing static and dynamic analysis tools and a newly developed component, called the Object Mocker, that enables the integration. The static analyzers are used to extract potentially vulnerable parts from large software. Targeted dynamic analysis is then used to analyze just the potentially vulnerable parts and check whether the vulnerability can actually be exploited. We present a case study that illustrates the use of the framework to analyze complex software vulnerabilities. The case study is based on a challenge application from the DARPA Space/Time Analysis for Cybersecurity (STAC) program. Interestingly, the challenge program had been hardened and was thought not to be vulnerable. Yet, using the framework, we discovered an unintentional vulnerability that can be exploited for a denial-of-service attack. The accompanying demo video depicts the case study. Video: https://youtu.be/m9OUWtocWPE
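As a loose analogy of the Object Mocker idea (Mockingbird itself targets Java), the Python sketch below replaces a flagged method's heavyweight dependencies with mock objects so that just the potentially vulnerable code can be executed and timed in isolation; all names are illustrative assumptions, not Mockingbird's API.

```python
# Analogy only: drive a statically flagged function with mocked collaborators
# and use its running time as a crude algorithmic-complexity signal.
import time
from unittest.mock import MagicMock

def targeted_dynamic_check(vulnerable_fn, payload, budget_seconds=1.0):
    """Run only the flagged function with mocked dependencies and a timer."""
    mocked_db = MagicMock()                  # stand-in for a heavyweight dependency
    mocked_db.lookup.return_value = []       # minimal behaviour needed to execute
    start = time.perf_counter()
    vulnerable_fn(mocked_db, payload)
    elapsed = time.perf_counter() - start
    return elapsed > budget_seconds          # True hints at a denial-of-service risk
```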