Debugging Data Flows in Reactive Programs
Herman Banken, E. Meijer, Georgios Gousios
Reactive Programming is a style of programming that provides developers with a set of abstractions that facilitate event handling and stream processing. Traditional debug tools lack support for Reactive Programming, leading developers to fall back to the most rudimentary debug tool available: logging to the console. In this paper, we present the design and implementation of RxFiddle, a visualization and debugging tool targeted at Rx, the most popular form of Reactive Programming. RxFiddle visualizes the dependencies and structure of the data flow, as well as the data inside the flow. We evaluate RxFiddle with an experiment involving 111 developers. The results show that RxFiddle can help developers finish debugging tasks faster than with traditional debugging tools.
{"title":"Debugging Data Flows in Reactive Programs","authors":"Herman Banken, E. Meijer, Georgios Gousios","doi":"10.1145/3180155.3180156","DOIUrl":"https://doi.org/10.1145/3180155.3180156","url":null,"abstract":"Reactive Programming is a style of programming that provides developers with a set of abstractions that facilitate event handling and stream processing. Traditional debug tools lack support for Reactive Programming, leading developers to fallback to the most rudimentary debug tool available: logging to the console. In this paper, we present the design and implementation of RxFiddle, a visualization and debugging tool targeted to Rx, the most popular form of Reactive Programming. RxFiddle visualizes the dependencies and structure of the data flow, as well as the data inside the flow. We evaluate RxFiddle with an experiment involving 111 developers. The results show that RxFiddle can help developers finish debugging tasks faster than with traditional debugging tools.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"30 1","pages":"752-763"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85541804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Repair of Mobile Friendly Problems in Web Pages
Sonal Mahajan, Negarsadat Abolhassani, Phil McMinn, William G. J. Halfond
Mobile devices have become a primary means of accessing the Internet. Unfortunately, many websites are not designed to be mobile friendly. This results in problems such as unreadable text, cluttered navigation, and content overflowing a device's viewport, all of which can lead to a frustrating and poor user experience. Existing techniques offer developers only limited help in repairing these mobile-friendly problems. To address this limitation of prior work, we designed a novel automated approach for repairing mobile-friendly problems in web pages. Our empirical evaluation showed that our approach was able to successfully resolve mobile-friendly problems in 95% of the evaluation subjects. In a user study, participants preferred our repaired versions of the subjects and also considered the repaired pages to be more readable than the originals.
{"title":"Automated Repair of Mobile Friendly Problems in Web Pages","authors":"Sonal Mahajan, Negarsadat Abolhassani, Phil McMinn, William G. J. Halfond","doi":"10.1145/3180155.3180262","DOIUrl":"https://doi.org/10.1145/3180155.3180262","url":null,"abstract":"Mobile devices have become a primary means of accessing the Internet. Unfortunately, many websites are not designed to be mobile friendly. This results in problems such as unreadable text, cluttered navigation, and content over owing a device's viewport; all of which can lead to a frustrating and poor user experience. Existing techniques are limited in helping developers repair these mobile friendly problems. To address this limitation of prior work, we designed a novel automated approach for repairing mobile friendly problems in web pages. Our empirical evaluation showed that our approach was able to successfully resolve mobile friendly problems in 95% of the evaluation subjects. In a user study, participants preferred our repaired versions of the subjects and also considered the repaired pages to be more readable than the originals.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"178 1","pages":"140-150"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82996122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When Not to Comment: Questions and Tradeoffs with API Documentation for C++ Projects
Andrew Head, Caitlin Sadowski, E. Murphy-Hill, Andrea Knight
Without usable and accurate documentation of how to use an API, developers can find themselves deterred from reusing relevant code. In C++, one place developers can find documentation is in a header file. When information is missing, they may look at the corresponding implementation code. To understand what's missing from C++ API documentation and the factors influencing whether it will be fixed, we conducted a mixed-methods study involving two experience sampling surveys with hundreds of developers at the moment they visited implementation code, interviews with 18 of those developers, and interviews with 8 API maintainers. In many cases, updating documentation may provide only limited value for developers, while requiring effort maintainers don't want to invest. We identify a set of questions maintainers and tool developers should consider when improving API-level documentation.
{"title":"When Not to Comment: Questions and Tradeoffs with API Documentation for C++ Projects","authors":"Andrew Head, Caitlin Sadowski, E. Murphy-Hill, Andrea Knight","doi":"10.1145/3180155.3180176","DOIUrl":"https://doi.org/10.1145/3180155.3180176","url":null,"abstract":"Without usable and accurate documentation of how to use an API, developers can find themselves deterred from reusing relevant code. In C++, one place developers can find documentation is in a header file. When information is missing, they may look at the corresponding implementation code. To understand what's missing from C++ API documentation and the factors influencing whether it will be fixed, we conducted a mixed-methods study involving two experience sampling surveys with hundreds of developers at the moment they visited implementation code, interviews with 18 of those developers, and interviews with 8 API maintainers. In many cases, updating documentation may provide only limited value for developers, while requiring effort maintainers don't want to invest. We identify a set of questions maintainers and tool developers should consider when improving API-level documentation.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"66 1","pages":"643-653"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83575452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Goal-Conflict Likelihood Assessment Based on Model Counting
Renzo Degiovanni, Pablo F. Castro, Marcelo Arroyo, Marcelo Ruiz, Nazareno Aguirre, M. Frias
In goal-oriented requirements engineering approaches, conflict analysis has been proposed as an abstraction for risk analysis. Intuitively, given a set of expected goals to be achieved by the system-to-be, a conflict represents a subtle situation that makes the goals diverge, i.e., become unsatisfiable as a whole. Conflict analysis is typically driven by the identify-assess-control cycle, aimed at identifying, assessing, and resolving conflicts that may obstruct the satisfaction of the expected goals. In particular, the assessment step is concerned with evaluating how likely the identified conflicts are, and how likely and severe their consequences would be. So far, existing assessment approaches restrict their analysis to obstacles (conflicts that prevent the satisfaction of a single goal) and assume that certain probabilistic information on the domain is provided, which must first be elicited from experienced users, statistical data, or simulations. In this paper, we present a novel automated approach for assessing how likely a conflict is that applies to general conflicts (not only obstacles) and requires no probabilistic information on the domain. Intuitively, given the LTL formulation of the domain and of a set of goals to be achieved, we compute goal conflicts and exploit string model counting techniques to estimate the likelihood that the corresponding conflicting situations occur and the severity with which they affect the satisfaction of the goals. This information can then be used to prioritize the conflicts to be resolved and to suggest which goals deserve attention for refinement.
{"title":"Goal-Conflict Likelihood Assessment Based on Model Counting","authors":"Renzo Degiovanni, Pablo F. Castro, Marcelo Arroyo, Marcelo Ruiz, Nazareno Aguirre, M. Frias","doi":"10.1145/3180155.3180261","DOIUrl":"https://doi.org/10.1145/3180155.3180261","url":null,"abstract":"In goal-oriented requirements engineering approaches, conflict analysis has been proposed as an abstraction for risk analysis. Intuitively, given a set of expected goals to be achieved by the system-to-be, a conflict represents a subtle situation that makes goals diverge, i.e., not be satisfiable as a whole. Conflict analysis is typically driven by the identify-assess-control cycle, aimed at identifying, assessing and resolving conflicts that may obstruct the satisfaction of the expected goals. In particular, the assessment step is concerned with evaluating how likely the identified conflicts are, and how likely and severe are their consequences. So far, existing assessment approaches restrict their analysis to obstacles (conflicts that prevent the satisfaction of a single goal), and assume that certain probabilistic information on the domain is provided, that needs to be previously elicited from experienced users, statistical data or simulations. In this paper, we present a novel automated approach to assess how likely a conflict is, that applies to general conflicts (not only obstacles) without requiring probabilistic information on the domain. Intuitively, given the LTL formulation of the domain and of a set of goals to be achieved, we compute goal conflicts, and exploit string model counting techniques to estimate the likelihood of the occurrence of the corresponding conflicting situations and the severity in which these affect the satisfaction of the goals. This information can then be used to prioritize conflicts to be resolved, and suggest which goals to drive attention to for refinements.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"38 1","pages":"1125-1135"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89170895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Debugging with Intelligence via Probabilistic Inference
Zhaogui Xu, Shiqing Ma, X. Zhang, Shuofei Zhu, Baowen Xu
We aim to debug a single failing execution without assistance from other passing or failing runs. In our context, debugging is a process with substantial uncertainty: many decisions must be made, such as which variables to inspect first. To deal with this uncertainty, we propose to equip machines with human-like intelligence. Specifically, we develop a highly automated debugging technique that couples human-like reasoning (e.g., dealing with uncertainty and fusing knowledge) with analysis based on program semantics, gaining the benefits of both while mitigating their limitations. We model debugging as a probabilistic inference problem, in which the likelihood of each executed statement instance and variable being correct or faulty is modeled by a random variable. Human knowledge, human-like reasoning rules, and program semantics are modeled as conditional probability distributions, also called probabilistic constraints. Solving these constraints identifies the most likely faulty statements. Our results show that the technique is highly effective: it can precisely identify root causes for a set of real-world bugs with a very small number of interactions with developers, far fewer than a recent proposal that does not encode human intelligence. Our user study also confirms that it substantially improves human productivity.
{"title":"Debugging with Intelligence via Probabilistic Inference","authors":"Zhaogui Xu, Shiqing Ma, X. Zhang, Shuofei Zhu, Baowen Xu","doi":"10.1145/3180155.3180237","DOIUrl":"https://doi.org/10.1145/3180155.3180237","url":null,"abstract":"We aim to debug a single failing execution without the assistance from other passing/failing runs. In our context, debugging is a process with substantial uncertainty – lots of decisions have to be made such as what variables shall be inspected first. To deal with such uncertainty, we propose to equip machines with human-like intelligence. Specifically, we develop a highly automated debugging technique that aims to couple human-like reasoning (e.g., dealing with uncertainty and fusing knowledge) with program semantics based analysis, to achieve benefits from the two and mitigate their limitations. We model debugging as a probabilistic inference problem, in which the likelihood of each executed statement instance and variable being correct/faulty is modeled by a random variable. Human knowledge, human-like reasoning rules and program semantics are modeled as conditional probability distributions, also called probabilistic constraints. Solving these constraints identifies the most likely faulty statements. Our results show that the technique is highly effective. It can precisely identify root causes for a set of real-world bugs in a very small number of interactions with developers, much smaller than a recent proposal that does not encode human intelligence. Our user study also confirms that it substantially improves human productivity.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"35 1","pages":"1171-1181"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79268523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Dichotomy of Debugging Behavior Among Programmers
M. Beller, N. Spruit, D. Spinellis, A. Zaidman
Debugging is an inevitable activity in most software projects, often difficult and more time-consuming than expected, earning it the nickname the "dirty little secret of computer science." Surprisingly, we have little knowledge of how software engineers debug software problems in the real world, whether they use dedicated debugging tools, and how knowledgeable they are about debugging. This study aims to shed light on these aspects by following a mixed-methods research approach. We conducted an online survey capturing how 176 developers reflect on debugging. We augmented this subjective survey data with objective observations of how 458 developers use the debugger included in their integrated development environments (IDEs), by instrumenting the popular Eclipse and IntelliJ IDEs with the purpose-built plugin WatchDog 2.0. To clarify the insights and discrepancies observed in the previous steps, we followed up with interviews of debugging experts and regular debugging users. Our results indicate that IDE-provided debuggers are not used as often as expected, as "printf debugging" remains a viable choice for many programmers. Furthermore, both knowledge and use of advanced debugging features are low. These results call for strengthening hands-on debugging experience in computer science curricula, and have already led to refinements in the implementation of modern IDE debuggers.
{"title":"On the Dichotomy of Debugging Behavior Among Programmers","authors":"M. Beller, N. Spruit, D. Spinellis, A. Zaidman","doi":"10.1145/3180155.3180175","DOIUrl":"https://doi.org/10.1145/3180155.3180175","url":null,"abstract":"Debugging is an inevitable activity in most software projects, often difficult and more time-consuming than expected, giving it the nickname the \"dirty little secret of computer science.\" Surprisingly, we have little knowledge on how software engineers debug software problems in the real world, whether they use dedicated debugging tools, and how knowledgeable they are about debugging. This study aims to shed light on these aspects by following a mixed-methods research approach. We conduct an online survey capturing how 176 developers reflect on debugging. We augment this subjective survey data with objective observations on how 458 developers use the debugger included in their integrated development environments (IDEs) by instrumenting the popular Eclipse and IntelliJ IDEs with the purpose-built plugin WatchDog 2.0. To clarify the insights and discrepancies observed in the previous steps, we followed up by conducting interviews with debugging experts and regular debugging users. Our results indicate that IDE-provided debuggers are not used as often as expected, as \"printf debugging\" remains a feasible choice for many programmers. Furthermore, both knowledge and use of advanced debugging features are low. These results call to strengthen hands-on debugging experience in computer science curricula and have already refined the implementation of modern IDE debuggers.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"52 1","pages":"572-583"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79541256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical Errors in Software Engineering Experiments: A Preliminary Literature Review
Rolando P. Reyes Ch., Ó. Dieste, E. Fonseca C., N. Juristo
Background: Statistical concepts and techniques are often applied incorrectly, even in mature disciplines such as medicine or psychology. Surprisingly, very few works study statistical problems in software engineering (SE). Aim: Assess the existence of statistical errors in SE experiments. Method: Compile the most common statistical errors in experimental disciplines, then survey experiments published at ICSE to assess whether these errors occur in high-quality SE publications. Results: The same errors identified in other disciplines were found in ICSE experiments: 30 of the reviewed papers included several error types, such as a) missing statistical hypotheses, b) missing sample size calculation, c) failure to assess statistical test assumptions, and d) uncorrected multiple testing. This rather large error rate is greater for research papers in which experiments are confined to the validation section. The origin of the errors can be traced back to a) researchers lacking sufficient statistical training, and b) a profusion of exploratory research. Conclusions: This paper provides preliminary evidence that SE research suffers from the same statistical problems as other experimental disciplines. However, the SE community appears to be unaware of any shortcomings in its experiments, whereas other disciplines work hard to avoid these threats. Further research is necessary to find the underlying causes and set up corrective measures, but some potentially effective actions are a priori easy to implement: a) improve the statistical training of SE researchers, and b) enforce quality assessment and reporting guidelines in SE publications.
{"title":"Statistical Errors in Software Engineering Experiments: A Preliminary Literature Review","authors":"Rolando P. Reyes Ch., Ó. Dieste, E. Fonseca C., N. Juristo","doi":"10.1145/3180155.3180161","DOIUrl":"https://doi.org/10.1145/3180155.3180161","url":null,"abstract":"Background: Statistical concepts and techniques are often applied incorrectly, even in mature disciplines such as medicine or psychology. Surprisingly, there are very few works that study statistical problems in software engineering (SE). Aim: Assess the existence of statistical errors in SE experiments. Method: Compile the most common statistical errors in experimental disciplines. Survey experiments published in ICSE to assess whether errors occur in high quality SE publications. Results: The same errors as identified in others disciplines were found in ICSE experiments, where 30 of the reviewed papers included several error types such as: a) missing statistical hypotheses, b) missing sample size calculation, c) failure to assess statistical test assumptions, and d) uncorrected multiple testing. This rather large error rate is greater for research papers where experiments are confined to the validation section. The origin of the errors can be traced back to: a) researchers not having sufficient statistical training, and b) a profusion of exploratory research. Conclusions: This paper provides preliminary evidence that SE research suffers from the same statistical problems as other experimental disciplines. However, the SE community appears to be unaware of any shortcomings in its experiments, whereas other disciplines work hard to avoid these threats. Further research is necessary to find the underlying causes and set up corrective measures, but there are some potentially effective actions and are a priori easy to implement: a) improve the statistical training of SE researchers, and b) enforce quality assessment and reporting guidelines in SE publications.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"13 1","pages":"1195-1206"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78678624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HireBuild: An Automatic Approach to History-Driven Repair of Build Scripts
Foyzul Hassan, Xiaoyin Wang
Advancements in software build tools such as Maven reduce build management effort, but developers still need specialized knowledge and considerable time to maintain build scripts and resolve build failures. More recent build tools such as Gradle give developers greater customization flexibility, but can be even more difficult to maintain. According to the TravisTorrent dataset of open-source software continuous integration, 22% of code commits include changes to build script files, either to maintain build scripts or to resolve build failures. Automated program repair techniques have great potential to reduce the cost of resolving software failures, but existing techniques mostly focus on repairing source code, so they cannot directly help resolve software build failures. To address this limitation, we propose HireBuild (History-Driven Repair of Build Scripts), the first approach to automatic patch generation for build scripts; it uses fix patterns automatically generated from existing build script fixes and recommends fix patterns based on build log similarity. From the TravisTorrent dataset, we extracted 175 build failures and their corresponding fixes that revise Gradle build scripts. Of these 175 build failures, we used the 135 earlier build fixes for automatic fix-pattern generation and the 40 most recent build failures (and fixes) for evaluating our approach. Our experiment shows that our approach can fix 11 of 24 reproducible build failures, or 45% of the reproducible build failures, within time comparable to that of manual fixes.
{"title":"HireBuild: An Automatic Approach to History-Driven Repair of Build Scripts","authors":"Foyzul Hassan, Xiaoyin Wang","doi":"10.1145/3180155.3180181","DOIUrl":"https://doi.org/10.1145/3180155.3180181","url":null,"abstract":"Advancements in software build tools such as Maven reduce build management effort, but developers still need specialized knowledge and long time to maintain build scripts and resolve build failures. More recent build tools such as Gradle give developers greater extent of customization flexibility, but can be even more difficult to maintain. According to the TravisTorrent dataset of open-source software continuous integration, 22% of code commits include changes in build script files to maintain build scripts or to resolve build failures. Automated program repair techniques have great potential to reduce cost of resolving software failures, but the existing techniques mostly focus on repairing source code so that they cannot directly help resolving software build failures. To address this limitation, we propose HireBuild: History-Driven Repair of Build Scripts, the first approach to automatic patch generation for build scripts, using fix patterns automatically generated from existing build script fixes and recommending fix patterns based on build log similarity. From TravisTorrent dataset, we extracted 175 build failures and their corresponding fixes which revise Gradle build scripts. Among these 175 build failures, we used the 135 earlier build fixes for automatic fix-pattern generation and the more recent 40 build failures (fixes) for evaluation of our approach. Our experiment shows that our approach can fix 11 of 24 reproducible build failures, or 45% of the reproducible build failures, within comparable time of manual fixes.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"98 1 1","pages":"1078-1089"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78163316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Launch-Mode-Aware Context-Sensitive Activity Transition Analysis
Yifei Zhang, Yulei Sui, Jingling Xue
Existing static analyses model activity transitions in Android apps context-insensitively, making it impossible to distinguish different activity launch modes, reducing the pointer analysis precision for an activity's callbacks, and potentially resulting in infeasible activity transition paths. In this paper, we introduce Chime, a launch-mode-aware context-sensitive activity transition analysis that models different instances of an activity class according to its launch mode and the transitions between activities context-sensitively, by working together with an object-sensitive pointer analysis. Our evaluation shows that our context-sensitive activity transition analysis is more precise than its context-insensitive counterpart in capturing activity transitions, facilitating GUI testing, and improving the pointer analysis precision.
{"title":"Launch-Mode-Aware Context-Sensitive Activity Transition Analysis","authors":"Yifei Zhang, Yulei Sui, Jingling Xue","doi":"10.1145/3180155.3180188","DOIUrl":"https://doi.org/10.1145/3180155.3180188","url":null,"abstract":"Existing static analyses model activity transitions in Android apps context-insensitively, making it impossible to distinguish different activity launch modes, reducing the pointer analysis precision for an activity's callbacks, and potentially resulting in infeasible activity transition paths. In this paper, we introduce Chime, a launch-mode-aware context-sensitive activity transition analysis that models different instances of an activity class according to its launch mode and the transitions between activities context-sensitively, by working together with an object-sensitive pointer analysis. Our evaluation shows that our context-sensitive activity transition analysis is more precise than its context-insensitive counterpart in capturing activity transitions, facilitating GUI testing, and improving the pointer analysis precision.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"12 1","pages":"598-608"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73170345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Static Verification Framework for Message Passing in Go Using Behavioural Types
J. Lange, Nicholas Ng, B. Toninho, N. Yoshida
The Go programming language has been heavily adopted in industry as a language that efficiently combines systems programming with concurrency. Go's concurrency primitives, inspired by process calculi such as CCS and CSP, feature channel-based communication and lightweight threads, providing a distinct means of structuring concurrent software. Despite its popularity, the Go programming ecosystem offers little to no support for guaranteeing the correctness of message-passing concurrent programs. This work proposes a practical verification framework for message-passing concurrency in Go: a robust static analysis infers an abstract model of a program's communication behaviour in the form of a behavioural type, a powerful typing discipline from process calculi. We use this analysis to verify the inferred behavioural type through model checking and termination checking, covering a range of safety and liveness properties of Go programs and providing several improvements over existing approaches. We evaluate our framework and its implementation on publicly available real-world Go code.
{"title":"A Static Verification Framework for Message Passing in Go Using Behavioural Types","authors":"J. Lange, Nicholas Ng, B. Toninho, N. Yoshida","doi":"10.1145/3180155.3180157","DOIUrl":"https://doi.org/10.1145/3180155.3180157","url":null,"abstract":"The Go programming language has been heavily adopted in industry as a language that efficiently combines systems programming with concurrency. Go's concurrency primitives, inspired by process calculi such as CCS and CSP, feature channel-based communication and lightweight threads, providing a distinct means of structuring concurrent software. Despite its popularity, the Go programming ecosystem offers little to no support for guaranteeing the correctness of message-passing concurrent programs. This work proposes a practical verification framework for message passing concurrency in Go by developing a robust static analysis that infers an abstract model of a program's communication behaviour in the form of a behavioural type, a powerful process calculi typing discipline. We make use of our analysis to deploy a model and termination checking based verification of the inferred behavioural type that is suitable for a range of safety and liveness properties of Go programs, providing several improvements over existing approaches. We evaluate our framework and its implementation on publicly available real-world Go code.","PeriodicalId":6560,"journal":{"name":"2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE)","volume":"70 1","pages":"1137-1148"},"PeriodicalIF":0.0,"publicationDate":"2018-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73912673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}