Comparing Constraints Mined From Execution Logs to Understand Software Evolution
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00082
Thomas Krismayer, Michael Vierhauser, Rick Rabiser, P. Grünbacher
Complex software systems evolve frequently, e.g., when introducing new features or fixing bugs during maintenance. However, understanding the impact of such changes on system behavior is often difficult. Many approaches have thus been proposed that analyze systems before and after changes, e.g., by comparing source code, model-based representations, or system execution logs. In this paper, we propose an approach for comparing run-time constraints, synthesized by a constraint mining algorithm, based on execution logs recorded before and after changes. Specifically, automatically mined constraints define the expected timing and order of recurring events and the values of data elements attached to events. Our approach presents the differences of the mined constraints to users, thereby providing a higher-level view of software evolution and supporting the analysis of the impact of changes on system behavior. We present a motivating example and a preliminary evaluation based on a cyber-physical system controlling unmanned aerial vehicles. The results of our preliminary evaluation show that our approach can help to analyze changed behavior and thus contributes to understanding software evolution.
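The core comparison step, diffing the constraint sets mined before and after a change, can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation; the constraint representation (constraints keyed by name, mapping to mined parameters) is an assumption:

```python
# Sketch: compare run-time constraints mined before and after a change.
# The constraint names and parameter dicts below are hypothetical.

def diff_constraints(before, after):
    """Return constraints that were added, removed, or changed."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return added, removed, changed

before = {
    "order(takeoff, ascend)": {"max_gap_ms": 500},
    "value(battery.level)":   {"min": 20},
}
after = {
    "order(takeoff, ascend)": {"max_gap_ms": 750},  # timing constraint relaxed
    "value(altitude.m)":      {"max": 120},         # new data-value constraint
}

added, removed, changed = diff_constraints(before, after)
```

Presenting `added`, `removed`, and `changed` to a user gives the higher-level view of the change that the paper describes: here, a relaxed timing bound, a dropped battery constraint, and a new altitude constraint.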
Performance-Influence Model for Highly Configurable Software with Fourier Learning and Lasso Regression
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00080
Huong Ha, Hongyu Zhang
Many software systems are highly configurable, providing a large number of configuration options for users to choose from. During the maintenance and operation of such systems, it is important to estimate system performance under any specific configuration and to understand which configuration options influence performance. However, measuring system performance under all possible configurations is often infeasible, as the number of configurations grows exponentially with the number of options. In this paper, we propose PerLasso, a performance modeling and prediction method based on Fourier Learning and Lasso (least absolute shrinkage and selection operator) regression. Using a small sample of measured performance values of a configurable system, PerLasso produces a performance-influence model that can 1) predict system performance under a new configuration and 2) explain the influence of individual features and their interactions on software performance. In addition, to reduce the number of Fourier coefficients to be estimated for large-scale systems, we design a novel dimension-reduction algorithm. Our experimental results on four synthetic and six real-world datasets confirm the effectiveness of our approach. Compared to existing performance-influence models, our models achieve higher or comparable prediction accuracy.
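The shape of such a performance-influence model can be sketched as a linear model over options plus pairwise interaction terms. Everything here is hypothetical: the option names, the coefficients (sparse, as a Lasso-style selection might leave them), and the base performance; PerLasso's Fourier-learning and fitting steps are not reproduced:

```python
from itertools import combinations

def expand(config):
    """Map a binary configuration to features: options plus pairwise interactions."""
    names = sorted(config)
    feats = {n: config[n] for n in names}
    for a, b in combinations(names, 2):
        feats[f"{a}*{b}"] = config[a] * config[b]
    return feats

def predict(coeffs, config, base=0.0):
    """Performance-influence model: base performance plus active influences."""
    feats = expand(config)
    return base + sum(w * feats.get(term, 0) for term, w in coeffs.items())

# Hypothetical sparse coefficients: "logging" was shrunk to zero and dropped,
# while an interaction term captures that enabling encryption together with
# compression costs extra.
coeffs = {"compression": 2.0, "encryption": 5.0, "compression*encryption": 3.0}
cfg = {"compression": 1, "encryption": 1, "logging": 0}
perf = predict(coeffs, cfg, base=10.0)  # 10 + 2 + 5 + 3
```

The sparsity is the point of using Lasso: options with no measurable influence get zero coefficients, so the surviving terms directly explain which features and interactions matter.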
Identifying the Within-Statement Changes to Facilitate Change Understanding
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00030
Chunhua Yang, E. J. Whitehead
Because current tree-differencing approaches either ignore changes that occur within a statement or do not present them abstractly, it is difficult to automatically understand revisions involving statement updates. We propose a tree-differencing approach for identifying within-statement changes. It calculates edit operations using an element-sensitive strategy and the longest common subsequence algorithm, and then generates metadata for each edit operation. The metadata include the type of operation, the type of entity, the name of the element part, the content, the content pattern, and all references involved. We have implemented the approach as a freely accessible tool, built on ChangeDistiller, that refines its statement-update type. Finally, to demonstrate how the proposed approach supports change understanding, we studied condition-expression changes in four open-source projects. We analyzed non-essential condition changes, effective changes that definitely affect the condition, and other changes. The results show that of the revisions with condition-expression changes, nearly 20% contain non-essential changes, while more than 60% have effective changes. Furthermore, we found many common patterns. For example, half of the revisions with effective changes were caused by adding or removing expressions in logical expressions; in these revisions, 47% strengthened the condition, while 49% weakened it.
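The LCS-based edit-operation step can be illustrated with Python's difflib, whose alignment is in the longest-common-subsequence family. The token representation is hypothetical, and the tool's element-sensitive strategy and metadata generation are not reproduced; this only shows how update/insert/delete operations fall out of the alignment:

```python
from difflib import SequenceMatcher

def within_statement_edits(old_tokens, new_tokens):
    """Derive update/insert/delete edit operations between two token lists
    of a statement, using difflib's subsequence alignment."""
    ops = []
    sm = SequenceMatcher(a=old_tokens, b=new_tokens, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "replace":
            ops.append(("update", old_tokens[i1:i2], new_tokens[j1:j2]))
        elif tag == "delete":
            ops.append(("delete", old_tokens[i1:i2], []))
        elif tag == "insert":
            ops.append(("insert", [], new_tokens[j1:j2]))
    return ops

# A condition-expression change inside one statement.
old = ["if", "(", "x", ">", "0", ")"]
new = ["if", "(", "x", ">=", "0", "&&", "y", ")"]
ops = within_statement_edits(old, new)
```

The alignment isolates the operator update and the added conjunct rather than reporting the whole statement as changed, which is the abstraction the paper argues statement-level differencers lack.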
Improving Bug Triaging with High Confidence Predictions at Ericsson
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00018
Aindrila Sarkar, Peter C. Rigby, Béla Bartalos
Correctly assigning bugs to the right developer or team, i.e., bug triaging, is a costly activity. A concerted effort has been made at Ericsson to adopt automated bug triaging to reduce development costs. In this work, we replicate research approaches that have been widely used in the literature, applying them to over 10k bug reports for 9 large products at Ericsson. We find that a logistic regression classifier using the simple textual and categorical attributes of the bug reports has the highest precision and recall, at 78.09% and 79.00%, respectively. Ericsson's bug reports often contain logs with crash dumps and alarms. We add this information to the bug triage models, but find that it does not improve precision or recall in Ericsson's context. Although our models perform as well as the best reported in the literature, a criticism of bug triaging at Ericsson is that the accuracy is not sufficient for regular use. We therefore develop a novel approach in which we only triage bugs when the model has high confidence in its prediction. We find that this improves accuracy to 90%, while still making predictions for 62% of the bug reports.
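The high-confidence idea reduces to a thresholding rule over the classifier's per-team probabilities: assign automatically only when the top probability is high enough, otherwise defer to a human. A minimal sketch with hypothetical team names and probabilities (the 0.8 threshold is an assumption, not the paper's tuned value):

```python
def triage(probabilities, threshold=0.8):
    """Assign a bug only when the model's top class probability clears the
    threshold; otherwise defer the bug to a human triager."""
    assigned, deferred = {}, []
    for bug_id, probs in probabilities.items():
        team, p = max(probs.items(), key=lambda kv: kv[1])
        if p >= threshold:
            assigned[bug_id] = team
        else:
            deferred.append(bug_id)
    return assigned, deferred

# Hypothetical per-team probabilities, e.g. from a logistic regression.
probs = {
    "BUG-1": {"radio": 0.93, "core": 0.05, "ui": 0.02},
    "BUG-2": {"radio": 0.40, "core": 0.35, "ui": 0.25},
}
assigned, deferred = triage(probs)
```

Raising the threshold trades coverage for accuracy, which is exactly the 90%-accuracy-at-62%-coverage trade-off the paper reports.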
Processing Large Datasets of Fined Grained Source Code Changes
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00064
S. Levin, A. Yehudai
In the era of Big Code, when researchers seek to study an ever larger number of repositories to support their findings, the data processing stage may require manipulating millions of records or more. In this work we focus on studies involving fine-grained, AST-level source code changes. We present how we extended the CodeDistillery source code mining framework with data manipulation capabilities aimed at alleviating the processing of large datasets of fine-grained source code changes. The capabilities we have introduced allow researchers to highly automate their repository mining process and to streamline the data acquisition and processing phases. These capabilities have already been used to conduct a number of studies, in the course of which tens of millions of fine-grained source code changes have been processed.
An Empirical Study of UI Implementations in Android Applications
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00016
Mian Wan, Negarsadat Abolhassani, Ali S. Alotaibi, William G. J. Halfond
Mobile app developers are able to design sophisticated user interfaces (UIs) that can improve a user's experience and contribute to an app's success. Developers invest in automated UI testing techniques, such as crawlers, to ensure that their app's UIs have a high level of quality. However, UI implementation mechanisms have changed significantly due to the availability of new APIs and mechanisms, such as fragments. In this paper, we study a large set of real-world apps to identify whether the mechanisms developers use to implement app UIs cause problems for those automated techniques. In addition, we examined the changes in these practices over time. Our results indicate that dynamic analyses face challenges in terms of completeness and changing development practices motivate the use of additional analyses, such as static analyses. We also discuss the implications of our results for current testing techniques and the design of new analyses.
The Relationship Between Cognitive Complexity and the Probability of Defects
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00095
Basma S. Alqadi
Researchers have identified several quality metrics to predict defects, relying on different kinds of information. However, these approaches lack metrics that estimate the effort required to understand system artifacts. In this research, novel metrics that compute the cognitive complexity of a program slice are introduced. These metrics help identify code that is more likely to contain defects because it is challenging to comprehend. The metrics include measures such as the total number of slices in a file, the size of a slice, the average number of identifiers, and the average spatial distance of a slice. An empirical investigation into how cognitive complexity correlates with defects in the version histories of 3 open-source systems is performed. The results show that an increase in cognitive complexity significantly increases the number of defects in 93% of the cases.
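A rough sketch of such slice-level measures, assuming a slice is represented as (line number, code) pairs. The metric definitions here are assumptions for illustration, not the paper's, and the identifier pattern also counts keywords for simplicity:

```python
import re

IDENT = re.compile(r"[A-Za-z_]\w*")  # identifier-like tokens (keywords included)

def slice_metrics(slices):
    """Compute simple slice-based measures: slice count, average size in lines,
    average identifier count, and average spatial distance (line span)."""
    n = len(slices)
    sizes = [len(s) for s in slices]
    idents = [sum(len(IDENT.findall(code)) for _, code in s) for s in slices]
    spans = [max(ln for ln, _ in s) - min(ln for ln, _ in s) for s in slices]
    return {
        "slices": n,
        "avg_size": sum(sizes) / n,
        "avg_identifiers": sum(idents) / n,
        "avg_spatial_distance": sum(spans) / n,
    }

# Two hypothetical slices from one file; the first spans lines 1-5.
s1 = [(1, "x = y + 1"), (5, "return x")]
s2 = [(2, "z = 0")]
m = slice_metrics([s1, s2])
```

A large spatial distance means the statements a reader must follow are scattered across the file, which is the kind of comprehension effort these metrics try to capture.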
Impact Analysis of Syntactic and Semantic Similarities on Patch Prioritization in Automated Program Repair
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00050
Moumita Asad, Kishan Kumar Ganguly, K. Sakib
Patch prioritization means sorting candidate patches by their probability of correctness. It helps minimize bug-fixing time and maximize the precision of an automated program repair technique. Approaches in the literature use either syntactic or semantic similarity between the faulty code and the fixing element to prioritize patches. In contrast, this paper analyzes the impact of combining syntactic and semantic similarity on patch prioritization. As a pilot study, it uses genealogical and variable similarity to measure semantic similarity, and normalized longest common subsequence to capture syntactic similarity. To evaluate the approach, 22 replacement mutation bugs from the IntroClassJava benchmark were used. The approach repairs all 22 bugs and achieves a precision of 100%.
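The syntactic side, normalized longest common subsequence over token sequences, and a simple blend with a semantic score can be sketched as follows. The equal weighting and the precomputed semantic score are assumptions; the paper's genealogical and variable similarity measures are not reproduced:

```python
def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def syntactic_similarity(faulty, candidate):
    """Normalized LCS over token sequences, in [0, 1]."""
    if not faulty and not candidate:
        return 1.0
    return lcs_length(faulty, candidate) / max(len(faulty), len(candidate))

def combined_score(faulty, candidate, semantic):
    """Blend syntactic and semantic similarity (equal weights, an assumption)."""
    return 0.5 * syntactic_similarity(faulty, candidate) + 0.5 * semantic

faulty    = ["if", "(", "a", "<", "b", ")"]
candidate = ["if", "(", "a", "<=", "b", ")"]
score = combined_score(faulty, candidate, semantic=0.8)
```

Sorting candidate patches by `combined_score` in descending order yields the prioritized list; candidates that score well on only one of the two dimensions fall behind those strong on both.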
Investigating Context Adaptation Bugs in Code Clones
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00026
Manishankar Mondal, B. Roy, C. Roy, Kevin A. Schneider
Identical or nearly similar code fragments in a code base are called code clones. There is a common belief that code cloning (copy/pasting code fragments) can introduce bugs in a software system if the copied code fragments are not properly adapted to their contexts (i.e., the surrounding code). However, no existing study has investigated whether such bugs are really present in code clones. We denote these bugs as Context Adaptation Bugs, or simply Context-Bugs, and investigate the extent to which they are present in code clones. We define and automatically analyze two clone evolutionary patterns that indicate the fixing of Context-Bugs. According to our analysis of thousands of revisions of six open-source subject systems written in Java, C, and C#, code cloning often introduces Context-Bugs into software systems. Around 50% of clone-related bug fixes can occur to fix Context-Bugs. Cloning (copy/pasting) a newly created code fragment (i.e., one that was not added in a former revision) is more likely to introduce Context-Bugs than cloning a preexisting fragment (i.e., one that was added in a former revision). Moreover, cloning across different files appears to have a significantly higher tendency to introduce Context-Bugs than cloning within the same file. Finally, Type 3 (gapped) clones have the highest tendency to contain Context-Bugs among the three major clone types. Our findings can be important for the early detection and removal of Context-Bugs in code clones.
Improving the Robustness and Efficiency of Continuous Integration and Deployment
Pub Date: 2019-09-01 | DOI: 10.1109/ICSME.2019.00099
Keheliya Gallaba
Modern software is developed at a rapid pace. To sustain that pace, organizations rely heavily on automated build, test, and release steps. To that end, Continuous Integration and Continuous Deployment (CI/CD) services take the incremental codebase changes produced by developers; compile, link, and package them into software deliverables; verify their functionality; and deliver them to end users. While CI/CD processes provide mission-critical features, if they are misconfigured or poorly operated, the pace of development may be slowed or even halted. To prevent such issues, in this thesis we set out to study and improve the robustness and efficiency of CI/CD. The thesis will include (1) conceptual contributions in the form of empirical studies of large samples of adopters of CI/CD tools to discover best practices and common limitations, as well as (2) technical contributions in the form of tools that help stakeholders avoid common limitations (e.g., data misinterpretation issues, CI configuration mistakes).