"Do Developers Discover New Tools On The Toilet?"
E. Murphy-Hill, Edward K. Smith, Caitlin Sadowski, Ciera Jaspan, C. Winter, M. Jorde, Andrea Knight, Andrew Trenk, Steve Gross
Maintaining awareness of useful tools is a substantial challenge for developers. Physical newsletters are a simple technique to inform developers about tools. In this paper, we evaluate such a technique, called Testing on the Toilet, by performing a mixed-methods case study. We first quantitatively evaluate how effective this technique is by applying statistical causal inference over six years of data about tools used by thousands of developers. We then qualitatively contextualize these results by interviewing and surveying 382 developers, from authors to editors to readers. We found that the technique was generally effective at increasing software development tool use, although the increase varied depending on factors such as the breadth of applicability of the tool, the extent to which the tool has reached saturation, and the memorability of the tool name.
{"title":"Do Developers Discover New Tools On The Toilet?","authors":"E. Murphy-Hill, Edward K. Smith, Caitlin Sadowski, Ciera Jaspan, C. Winter, M. Jorde, Andrea Knight, Andrew Trenk, Steve Gross","doi":"10.1109/ICSE.2019.00059","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00059","url":null,"abstract":"Maintaining awareness of useful tools is a substantial challenge for developers. Physical newsletters are a simple technique to inform developers about tools. In this paper, we evaluate such a technique, called Testing on the Toilet, by performing a mixed-methods case study. We first quantitatively evaluate how effective this technique is by applying statistical causal inference over six years of data about tools used by thousands of developers. We then qualitatively contextualize these results by interviewing and surveying 382 developers, from authors to editors to readers. We found that the technique was generally effective at increasing software development tool use, although the increase varied depending on factors such as the breadth of applicability of the tool, the extent to which the tool has reached saturation, and the memorability of the tool name.","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"1 1","pages":"465-475"},"PeriodicalIF":0.0,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89626508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Mimic: UI Compatibility Testing System for Android Apps"
Taeyeon Ki, Chang Min Park, Karthik Dantu, Steven Y. Ko, Lukasz Ziarek
This paper proposes Mimic, an automated UI compatibility testing system for Android apps. Mimic is designed specifically for comparing the UI behavior of an app across different devices, different Android versions, and different app versions. This design choice stems from a common problem that Android developers and researchers face: how to test whether an app behaves consistently across different environments or internal changes. Mimic allows Android app developers to easily perform backward and forward compatibility testing for their apps. It also enables a clear comparison between a stable version of an app and a newer version. In doing so, Mimic allows multiple testing strategies to be used, such as randomized or sequential testing. Finally, Mimic's programming model allows such tests to be scripted with much less developer effort than other comparable systems. Additionally, Mimic supports parallel testing with multiple devices, thereby speeding up testing. To demonstrate these capabilities, we perform extensive tests for each of the scenarios described above. Our results show that Mimic is effective in detecting forward and backward compatibility issues and in verifying the runtime behavior of apps. Our evaluation also shows that Mimic significantly reduces the development burden for developers.
{"title":"Mimic: UI Compatibility Testing System for Android Apps","authors":"Taeyeon Ki, Chang Min Park, Karthik Dantu, Steven Y. Ko, Lukasz Ziarek","doi":"10.1109/ICSE.2019.00040","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00040","url":null,"abstract":"This paper proposes Mimic, an automated UI compatibility testing system for Android apps. Mimic is designed specifically for comparing the UI behavior of an app across different devices, different Android versions, and different app versions. This design choice stems from a common problem that Android developers and researchers face–how to test whether or not an app behaves consistently across different environments or internal changes. Mimic allows Android app developers to easily perform backward and forward compatibility testing for their apps. It also enables a clear comparison between a stable version of app and a newer version of app. In doing so, Mimic allows multiple testing strategies to be used, such as randomized or sequential testing. Finally, Mimic programming model allows such tests to be scripted with much less developer effort than other comparable systems. Additionally, Mimic allows parallel testing with multiple testing devices and thereby speeds up testing time. To demonstrate these capabilities, we perform extensive tests for each of the scenarios described above. Our results show that Mimic is effective in detecting forward and backward compatibility issues, and verify runtime behavior of apps. Our evaluation also shows that Mimic significantly reduces the development burden for developers.","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"115 16 1","pages":"246-256"},"PeriodicalIF":0.0,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84189262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Parallel Refinement for Multi-Threaded Program Verification"
Liangze Yin, Wei Dong, Wanwei Liu, J. Wang
Program verification is one of the most important methods for ensuring the correctness of concurrent programs. However, due to the path explosion problem, concurrent program verification is usually time consuming, which hinders its scalability to industrial programs. Parallel processing is a mainstream technique for problems that require massive computation, so designing parallel algorithms to improve the performance of concurrent program verification is highly desirable. This paper focuses on parallelizing abstraction refinement, one of the most efficient techniques for concurrent program verification. We present a parallel refinement framework that employs multiple engines to refine the abstraction in parallel. Unlike existing work that parallelizes the search process, our method achieves the effect of parallelization by sharing refinement constraints and learnt clauses, so that the number of required iterations can be significantly reduced. We have implemented this framework on top of the scheduling-constraint-based abstraction refinement method, one of the best methods for concurrent program verification. Experiments on SV-COMP 2018 show encouraging results: for complex programs requiring a large number of iterations, our method obtains a linear reduction in the number of iterations and significantly improves verification performance.
{"title":"Parallel Refinement for Multi-Threaded Program Verification","authors":"Liangze Yin, Wei Dong, Wanwei Liu, J. Wang","doi":"10.1109/ICSE.2019.00074","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00074","url":null,"abstract":"Program verification is one of the most important methods to ensuring the correctness of concurrent programs. However, due to the path explosion problem, concurrent program verification is usually time consuming, which hinders its scalability to industrial programs. Parallel processing is a mainstream technique to deal with those problems which require mass computing. Hence, designing parallel algorithms to improve the performance of concurrent program verification is highly desired. This paper focuses on parallelization of the abstraction refinement technique, one of the most efficient techniques for concurrent program verification. We present a parallel refinement framework which employs multiple engines to refine the abstraction in parallel. Different from existing work which parallelizes the search process, our method achieves the effect of parallelization by refinement constraint and learnt clause sharing, so that the number of required iterations can be significantly reduced. We have implemented this framework on the scheduling constraint based abstraction refinement method, one of the best methods for concurrent program verification. Experiments on SV-COMP 2018 show the encouraging results of our method. For those complex programs requiring a large number of iterations, our method can obtain a linear reduction of the iteration number and significantly improve the verification performance.","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"17 1","pages":"643-653"},"PeriodicalIF":0.0,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83103831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Resource-Aware Program Analysis Via Online Abstraction Coarsening"
K. Heo, Hakjoo Oh, Hongseok Yang
We present a new technique for developing a resource-aware program analysis. Such an analysis is aware of constraints on available physical resources (such as memory size), tracks its own resource use, and adjusts its behavior during fixpoint computation in order to meet the constraint while achieving high precision. Our resource-aware analysis adjusts its behavior by coarsening the program abstraction, which usually makes the analysis consume less memory and time until completion. It does so multiple times during the analysis, under the direction of what we call a controller. The controller constantly intervenes in the fixpoint computation of the analysis and decides how much the analysis should coarsen the abstraction. We present an algorithm for learning a good controller automatically from benchmark programs. We applied our technique to a static analysis for C programs, where we control the degree of flow-sensitivity to meet a constraint on peak memory consumption. Experimental results with 18 real-world programs show that our algorithm can learn a good controller, and that the analysis with this controller meets the constraint and utilizes the available memory effectively.
{"title":"Resource-Aware Program Analysis Via Online Abstraction Coarsening","authors":"K. Heo, Hakjoo Oh, Hongseok Yang","doi":"10.1109/ICSE.2019.00027","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00027","url":null,"abstract":"We present a new technique for developing a resource-aware program analysis. Such an analysis is aware of constraints on available physical resources, such as memory size, tracks its resource use, and adjusts its behaviors during fixpoint computation in order to meet the constraint and achieve high precision. Our resource-aware analysis adjusts behaviors by coarsening program abstraction, which usually makes the analysis consume less memory and time until completion. It does so multiple times during the analysis, under the direction of what we call a controller. The controller constantly intervenes in the fixpoint computation of the analysis and decides how much the analysis should coarsen the abstraction. We present an algorithm for learning a good controller automatically from benchmark programs. We applied our technique to a static analysis for C programs, where we control the degree of flow-sensitivity to meet a constraint on peak memory consumption. The experimental results with 18 real-world programs show that our algorithm can learn a good controller and the analysis with this controller meets the constraint and utilizes available memory effectively.","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"39 1","pages":"94-104"},"PeriodicalIF":0.0,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78888472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"VFix: Value-Flow-Guided Precise Program Repair for Null Pointer Dereferences"
Xuezheng Xu, Yulei Sui, Hua Yan, Jingling Xue
Automated Program Repair (APR) faces a key challenge in efficiently generating correct patches from a potentially infinite solution space. Existing approaches, which attempt to reason about the entire solution space, can be ineffective (often producing no plausible patches at all) and imprecise (often producing plausible but incorrect patches). We present VFix, a new value-flow-guided APR approach that fixes null pointer exception (NPE) bugs by considering a substantially reduced solution space, in order to greatly increase the number of correct patches generated. By reasoning about the data and control dependences in the program, VFix can identify bug-relevant repair statements more accurately and generate more correct repairs than prior approaches. VFix outperforms a set of 8 state-of-the-art APR tools in fixing the NPE bugs in Defects4J in terms of both precision (correctly fixing 3 times as many bugs as the most precise of these tools, and 50% more than all the bugs correctly fixed by the 8 tools altogether) and efficiency (producing a correct patch in minutes instead of hours).
{"title":"VFix: Value-Flow-Guided Precise Program Repair for Null Pointer Dereferences","authors":"Xuezheng Xu, Yulei Sui, Hua Yan, Jingling Xue","doi":"10.1109/ICSE.2019.00063","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00063","url":null,"abstract":"Automated Program Repair (APR) faces a key challenge in efficiently generating correct patches from a potentially infinite solution space. Existing approaches, which attempt to reason about the entire solution space, can be ineffective (by often producing no plausible patches at all) and imprecise (by often producing plausible but incorrect patches). We present VFIX, a new value-flow-guided APR approach, to fix null pointer exception (NPE) bugs by considering a substantially reduced solution space in order to greatly increase the number of correct patches generated. By reasoning about the data and control dependences in the program, VFIX can identify bug-relevant repair statements more accurately and generate more correct repairs than before. VFIX outperforms a set of 8 state-of-the-art APR tools in fixing the NPE bugs in Defects4j in terms of both precision (by correctly fixing 3 times as many bugs as the most precise one and 50% more than all the bugs correctly fixed by these 8 tools altogether) and efficiency (by producing a correct patch in minutes instead of hours).","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"28 1","pages":"512-523"},"PeriodicalIF":0.0,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72678169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Scalable Approaches for Test Suite Reduction"
Emilio Cruciani, Breno Miranda, R. Verdecchia, A. Bertolino
Test suite reduction approaches aim at decreasing software regression testing costs by selecting a representative subset from large test suites. Most existing techniques are too expensive for handling modern massive systems and, moreover, depend on artifacts, such as code coverage metrics or specification models, that are not commonly available at large scale. We present a family of novel, very efficient approaches for similarity-based test suite reduction that apply algorithms borrowed from the big-data domain, together with smart heuristics, to find an evenly spread subset of test cases. The approaches are very general since they only use as input the test cases themselves (test source code or command-line input). We evaluate four approaches in a version that selects a fixed budget B of test cases, and also in an adequate version that performs the reduction while guaranteeing some fixed coverage. The results show that the approaches yield a fault detection loss comparable to state-of-the-art techniques, while providing huge gains in terms of efficiency. When applied to a suite of more than 500K real-world test cases, the most efficient of the four approaches could select B test cases (for varying values of B) in less than 10 seconds.
{"title":"Scalable Approaches for Test Suite Reduction","authors":"Emilio Cruciani, Breno Miranda, R. Verdecchia, A. Bertolino","doi":"10.1109/ICSE.2019.00055","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00055","url":null,"abstract":"Test suite reduction approaches aim at decreasing software regression testing costs by selecting a representative subset from large-size test suites. Most existing techniques are too expensive for handling modern massive systems and moreover depend on artifacts, such as code coverage metrics or specification models, that are not commonly available at large scale. We present a family of novel very efficient approaches for similaritybased test suite reduction that apply algorithms borrowed from the big data domain together with smart heuristics for finding an evenly spread subset of test cases. The approaches are very general since they only use as input the test cases themselves (test source code or command line input).We evaluate four approaches in a version that selects a fixed budget B of test cases, and also in an adequate version that does the reduction guaranteeing some fixed coverage. The results show that the approaches yield a fault detection loss comparable to state-of-the-art techniques, while providing huge gains in terms of efficiency. When applied to a suite of more than 500K real world test cases, the most efficient of the four approaches could select B test cases (for varying B values) in less than 10 seconds.","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"19 1","pages":"419-429"},"PeriodicalIF":0.0,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73634269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"CTRAS: Crowdsourced Test Report Aggregation and Summarization"
Rui Hao, Yang Feng, James A. Jones, Yuying Li, Zhenyu Chen
Crowdsourced testing has been widely adopted to improve the quality of various software products. Crowd workers typically perform testing tasks and report their experiences through test reports. While crowdsourced test reports provide feedback from real usage scenarios, inspecting such a large number of reports is a time-consuming yet inevitable task. To improve the efficiency of this task, widely used issue-tracking systems, such as JIRA, Bugzilla, and Mantis, provide keyword-search-based methods to assist users in identifying duplicate test reports. However, for mobile devices (such as mobile phones), where crowdsourced test reports often contain insufficient text descriptions but rich screenshots, these text-analysis-based methods become less effective because the data has fundamentally changed. In this paper, instead of focusing only on detecting duplicates based on textual descriptions, we present CTRAS: a novel approach to leveraging duplicates to enrich the content of bug descriptions and improve the efficiency of inspecting these reports. CTRAS automatically aggregates duplicates based on both textual information and screenshots, and further summarizes the duplicate test reports into a comprehensive and comprehensible report. To validate CTRAS, we conducted quantitative studies using more than 5000 test reports collected from 12 industrial crowdsourced projects. The experimental results reveal that CTRAS reaches an accuracy of 0.87, on average, in automatically detecting duplicate reports, and that it outperforms the classic Max-Coverage-based and MMR summarization methods under the Jensen-Shannon divergence metric. Moreover, we conducted a task-based user study with 30 participants, whose results indicate that CTRAS can save nearly 30% of the time cost on average without loss of correctness.
{"title":"CTRAS: Crowdsourced Test Report Aggregation and Summarization","authors":"Rui Hao, Yang Feng, James A. Jones, Yuying Li, Zhenyu Chen","doi":"10.1109/ICSE.2019.00096","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00096","url":null,"abstract":"Crowdsourced testing has been widely adopted to improve the quality of various software products. Crowdsourced workers typically perform testing tasks and report their experiences through test reports. While the crowdsourced test reports provide feedbacks from real usage scenarios, inspecting such a large number of reports becomes a time-consuming yet inevitable task. To improve the efficiency of this task, existing widely used issue-tracking systems, such as JIRA, Bugzilla, and Mantis, have provided keyword-search-based methods to assist users in identifying duplicate test reports. However, on mobile devices (such as mobile phones), where the crowdsourced test reports often contain insufficient text descriptions but instead rich screenshots, these text-analysis-based methods become less effective because the data has fundamentally changed. In this paper, instead of focusing on only detecting duplicates based on textual descriptions, we present CTRAS: a novel approach to leveraging duplicates to enrich the content of bug descriptions and improve the efficiency of inspecting these reports. CTRAS is capable of automatically aggregating duplicates based on both textual information and screenshots, and further summarizes the duplicate test reports into a comprehensive and comprehensible report. To validate CTRAS, we conducted quantitative studies using more than 5000 test reports, collected from 12 industrial crowdsourced projects. The experimental results reveal that CTRAS can reach an accuracy of 0.87, on average, regarding automatically detecting duplicate reports, and it outperforms the classic Max-Coverage-based and MMR summarization methods under Jensen Shannon divergence metric. Moreover, we conducted a task-based user study with 30 participants, whose result indicates that CTRAS can save nearly 30% time cost on average without loss of correctness.","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"121 1","pages":"900-911"},"PeriodicalIF":0.0,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76044343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"AdJust: Runtime Mitigation of Resource Abusing Third-Party Online Ads"
Weihang Wang, I. L. Kim, Yunhui Zheng
Online advertising is the most critical revenue stream for many Internet companies. However, showing ads on websites comes with a price tag. Since website contents and third-party ads are blended together, third-party ads may compete with the publisher's contents, delaying or even breaking the rendering of first-party contents. In addition, dynamically including scripts from ad networks all over the world may introduce buggy scripts that slow down page loads and even freeze the browser. The resulting usability problems lead to bad user experience and lower profits. The problems caused by such resource-abusing ads originate from two root causes: first, content publishers have no control over third-party ads; second, publishers cannot differentiate the resources consumed by ads from those consumed by their own contents. To address these challenges, we propose an effective technique, AdJust, that allows publishers to specify constraints on events associated with third-party ads (e.g., URL requests, HTML element creations, and timers), so that they can mitigate user experience degradations and enforce a consistent ad experience for all users. We report on a series of experiments over the Alexa top 200 news websites. The results point to the efficacy of our proposed technique: AdJust effectively mitigated degradations that freeze web browsers (on 36 websites), reduced the load time of publisher contents (on 61 websites), prioritized publisher contents (on 166 websites), and ensured consistent rendering orders among top ads (on 68 websites).
{"title":"AdJust: Runtime Mitigation of Resource Abusing Third-Party Online Ads","authors":"Weihang Wang, I. L. Kim, Yunhui Zheng","doi":"10.1109/ICSE.2019.00105","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00105","url":null,"abstract":"Online advertising is the most critical revenue stream for many Internet companies. However, showing ads on websites comes with a price tag. Since website contents and third-party ads are blended together, third-party ads may compete with the publisher contents, delaying or even breaking the rendering of first-party contents. In addition, dynamically including scripts from ad networks all over the world may introduce buggy scripts that slow down page loads and even freeze the browser. The resulting poor usability problems lead to bad user experience and lower profits. The problems caused by such resource abusing ads are originated from two root causes: First, content publishers have no control over third-party ads. Second, publishers cannot differentiate resource consumed by ads from that consumed by their own contents. To address these challenges, we propose an effective technique, AdJust, that allows publishers to specify constraints on events associated with third-party ads (e.g., URL requests, HTML element creations, and timers), so that they can mitigate user experience degradations and enforce consistent ads experience to all users. We report on a series of experiments over the Alexa top 200 news websites. The results point to the efficacy of our proposed techniques: AdJust effectively mitigated degradations that freeze web browsers (on 36 websites), reduced the load time of publisher contents (on 61 websites), prioritized publisher contents (on 166 websites) and ensured consistent rendering orders among top ads (on 68 websites).","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"7 1","pages":"1005-1015"},"PeriodicalIF":0.0,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81987361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Detecting Atomicity Violations for Event-Driven Node.js Applications"
Xiaoning Chang, Wensheng Dou, Yu Gao, Jie Wang, Jun Wei, Tao Huang
Node.js has been widely used as an event-driven server-side architecture. To improve performance, a task in a Node.js application is usually divided into a group of events, which are non-deterministically scheduled by Node.js. Developers may assume that such a group of events (an atomic event group) will be processed atomically, without interruption. However, the atomicity of an atomic event group is not guaranteed by Node.js, and thus other events may interrupt its execution, break the atomicity, and cause unexpected results. Existing approaches mainly focus on event races between two events, and cannot detect higher-level atomicity violations among a group of events. In this paper, we propose NodeAV, which can predictively detect atomicity violations in Node.js applications based on an execution trace. Using happens-before relations among the events in an execution trace, we automatically identify pairs of events that should be processed atomically, and use predefined atomicity violation patterns to detect violations. We have evaluated NodeAV on real-world Node.js applications. The experimental results show that NodeAV can effectively detect atomicity violations in these applications.
{"title":"Detecting Atomicity Violations for Event-Driven Node.js Applications","authors":"Xiaoning Chang, Wensheng Dou, Yu Gao, Jie Wang, Jun Wei, Tao Huang","doi":"10.1109/ICSE.2019.00073","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00073","url":null,"abstract":"Node.js has been widely-used as an event-driven server-side architecture. To improve performance, a task in a Node.js application is usually divided into a group of events, which are non-deterministically scheduled by Node.js. Developers may assume that the group of events (named atomic event group) should be atomically processed, without interruption. However, the atomicity of an atomic event group is not guaranteed by Node.js, and thus other events may interrupt the execution of the atomic event group, break down the atomicity and cause unexpected results. Existing approaches mainly focus on event race among two events, and cannot detect high-level atomicity violations among a group of events. In this paper, we propose NodeAV, which can predictively detect atomicity violations in Node.js applications based on an execution trace. Based on happens-before relations among events in an execution trace, we automatically identify a pair of events that should be atomically processed, and use predefined atomicity violation patterns to detect atomicity violations. We have evaluated NodeAV on real-world Node.js applications. The experimental results show that NodeAV can effectively detect atomicity violations in these Node.js applications.","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"5 1","pages":"631-642"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74247942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Going Farther Together: The Impact of Social Capital on Sustained Participation in Open Source"
Huilian Sophie Qiu, Alexander Nolte, Anita R. Brown, Alexander Serebrenik, Bogdan Vasilescu
Sustained participation by contributors in open-source software is critical to the survival of open-source projects and can provide career advancement benefits to individual contributors. However, not all contributors reap the benefits of open-source participation fully, with prior work showing that women are particularly underrepresented and at higher risk of disengagement. While many barriers to participation in open source have been documented in the literature, relatively little is known about how the social networks that open-source contributors form impact their chances of long-term engagement. In this paper we report on a mixed-methods empirical study of the role of social capital (i.e., the resources people can gain from their social connections) in sustained participation by women and men in open-source GitHub projects. Combining survival analysis on a large, longitudinal data set with insights derived from a user survey, we confirm that while social capital is beneficial for prolonged engagement for both genders, women are at a disadvantage in teams lacking diversity in expertise.
{"title":"Going Farther Together: The Impact of Social Capital on Sustained Participation in Open Source","authors":"Huilian Sophie Qiu, Alexander Nolte, Anita R. Brown, Alexander Serebrenik, Bogdan Vasilescu","doi":"10.1109/ICSE.2019.00078","DOIUrl":"https://doi.org/10.1109/ICSE.2019.00078","url":null,"abstract":"Sustained participation by contributors in opensource software is critical to the survival of open-source projects and can provide career advancement benefits to individual contributors. However, not all contributors reap the benefits of open-source participation fully, with prior work showing that women are particularly underrepresented and at higher risk of disengagement. While many barriers to participation in open-source have been documented in the literature, relatively little is known about how the social networks that open-source contributors form impact their chances of long-term engagement. In this paper we report on a mixed-methods empirical study of the role of social capital (i.e., the resources people can gain from their social connections) for sustained participation by women and men in open-source GitHub projects. After combining survival analysis on a large, longitudinal data set with insights derived from a user survey, we confirm that while social capital is beneficial for prolonged engagement for both genders, women are at disadvantage in teams lacking diversity in expertise.","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":"12 1","pages":"688-699"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84423628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}