Managing software development projects is a complex endeavor due to the constant emergence of unforeseen events that deviate from initial expectations. A competent project leader is not just someone who follows the planned course but also someone adept at handling and minimizing inconveniences, ultimately striving to achieve results that align as closely as possible with the desired outcome. However, individuals involved in technological development often cling to familiar tools that have previously yielded positive outcomes, even when those tools may not be the best fit for the current project context. The Agile Manifesto has significantly transformed project management, infusing the discipline with a fresh perspective. Nevertheless, several challenges remain to be overcome. In this article, we aim to provide a guide that addresses these difficulties and minimizes their impact. We explore the selection of key factors that adequately describe a project's complexity, which can subsequently be used in conjunction with the Cynefin framework to categorize management strategies, techniques, and tools based on their applicability to specific complexities. Additionally, we offer insights on adapting project management approaches throughout the project life cycle in response to changes in reality, using the dynamics outlined by the Cynefin framework. Finally, we present suitable strategies, techniques, and tools for agile project management based on the complexity context assigned by the Cynefin framework.
{"title":"Selection of agile project management approaches based on project complexity","authors":"Fernando Pinciroli","doi":"10.1002/smr.2716","DOIUrl":"10.1002/smr.2716","url":null,"abstract":"<p>Managing software development projects is a complex endeavor due to the constant emergence of unforeseen events that deviate from initial expectations. A competent project leader is not just someone who follows the planned course but also adept at handling and minimizing inconveniences, ultimately striving to achieve results that align as closely as possible with the desired outcome. However, individuals involved in technological development often cling to familiar tools that have previously yielded positive outcomes, even when those tools may not be the best fit for the current project context. The Agile Manifesto has significantly transformed project management, infusing the discipline with a fresh perspective. Nevertheless, there remain several challenges to overcome. In this article, we aim to provide a guide that addresses these difficulties and minimizes their impact. We explore the selection of key factors that adequately describe a project's complexity, which can subsequently be used in conjunction with the Cynefin framework to categorize management strategies, techniques, and tools based on their applicability to specific complexities. Additionally, we offer insights on adapting project management approaches throughout the project life cycle in response to changes in reality, utilizing the dynamics outlined by the Cynefin framework. Finally, we present suitable strategies, techniques, and tools for agile project management based on the complexity context assigned by the Cynefin framework.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 12","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141867837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Organizations creating software commonly utilize software development lifecycles (SDLCs) to structure development activities. Secure development lifecycles (SDLs) integrate into SDLCs, adding security or compliance activities. They are widely used and have been published by industry leaders and in the literature. These SDLs, however, were mostly designed before or while cloud services and other hosted solutions became popular. Such offerings widen the provider's responsibilities, as the provider not only delivers software but also operates and decommissions it. SDLs, however, do not always account for this change. Security maturity models (SMMs) help to assess SDLs and identify improvements by introducing a baseline to compare against. Several of these models were created after the advent of hosted solutions and are more recent than commonly referenced SDLs. Recent SMMs and SDLs may therefore support hosted solutions better than older proposals do. This paper compares a set of current and historic SDLs and SMMs to review their support for hosted solutions, including how that support has changed over time. Security, privacy, and support for small or agile organizations are considered, as all are relevant to hosted solutions. The SDLs analyzed include Microsoft's SDL, McGraw's Touchpoints, Cisco's SDL, and Stackpole and Oksendahl's SDL2. The SMMs reviewed are OWASP's Software Assurance Maturity Model 2 and DevSecOps Maturity Model. To assess the support for hosted solutions, the security and privacy activities foreseen in each SDLC phase are compared, before organizational compatibility, activity relevance, and efficiency are assessed. The paper further demonstrates how organizations may select and adjust a suitable proposal. The analyzed proposals are found not to sufficiently support hosted solutions: important SDLC phases, such as solution retirement, are not always sufficiently supported, and agile practices, such as working in sprints, and small organizations are often insufficiently considered. Efficiency is found to vary with the application context. A clear trend of improvement since the proliferation of hosted solutions cannot be identified; further work is therefore required.
{"title":"Evolution of secure development lifecycles and maturity models in the context of hosted solutions","authors":"Felix Lange, Immanuel Kunz","doi":"10.1002/smr.2711","DOIUrl":"10.1002/smr.2711","url":null,"abstract":"<p>Organizations creating software commonly utilize software development lifecycles (SDLCs) to structure development activities. Secure development lifecycles (SDLs) integrate into SDLCs, adding security or compliance activities. They are widely used and have been published by industry leaders and in literature. These SDLs, however, were mostly designed before or while <i>cloud services</i> and other <i>hosted solutions</i> became popular. Such offerings widen the provider's responsibilities, as they not only deliver software but operate and decommission it as well. SDLs, however, do not always account for this change. Security maturity models (SMMs) help to assess SDLs and identify improvements by introducing a baseline to compare against. Multiple of these models were created after the advent of hosted solutions and are more recent than commonly referenced SDLs. Recent SMMs and SDLs may therefore support hosted solutions better than older proposals do. This paper compares a set of current and historic SDLs and SMMs in order to review their support for hosted solutions, including how support has changed over time. Security, privacy, and support for small or agile organizations are considered, as all are relevant to hosted solutions. The SDLs analyzed include Microsoft's SDL, McGraw's Touchpoints, the Cisco's SDL, and Stackpole and Oksendahl's SDL<sup>2</sup>. The SMMs reviewed are OWASP's Software Assurance Maturity Model 2 and DevSecOps Maturity Model. To assess the support for hosted solutions, the security and privacy activities foreseen in each SDLC phase are compared, before organizational compatibility, activity relevance, and efficiency are assessed. The paper further demonstrates how organizations may select and adjust a suitable proposal. The analyzed proposals are found to not sufficiently support hosted solutions: Important SDLC phases, such as solution retirement, are not always sufficiently supported. Agile practices, such as working in sprints, and small organizations are often not sufficiently considered as well. Efficiency is found to vary based on the application context. A clear improvement trend from before the proliferation of hosted solutions cannot be identified. Future work is therefore found to be required.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 12","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.2711","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141867836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hanting Chu, Pengcheng Zhang, Hai Dong, Yan Xiao, Shunhui Ji
The growing popularity of smart contracts in various areas, such as digital payments and the Internet of Things, has led to an increase in smart contract security challenges. Researchers have responded by developing vulnerability detection tools. However, the effectiveness of these tools is limited by the lack of authentic smart contract vulnerability datasets with which to comprehensively assess their capacity to detect diverse vulnerabilities. This paper proposes a Deep Learning-based Smart contract vulnerability Generation approach (SGDL) to overcome this challenge. SGDL utilizes static analysis techniques to extract both syntactic and semantic information from the contracts. It then uses a classification technique to match injected vulnerabilities with contracts. A generative adversarial network is employed to generate smart contract vulnerability fragments, creating a diverse and authentic pool of fragments. The vulnerability fragments are then injected into the smart contracts using an abstract syntax tree to ensure their syntactic correctness. Our experimental results demonstrate that our method is more effective than existing vulnerability injection methods at evaluating the vulnerability detection capacity of existing tools. Overall, SGDL provides a comprehensive and innovative solution to the critical issue of authentic and diverse smart contract vulnerability datasets.
{"title":"SGDL: Smart contract vulnerability generation via deep learning","authors":"Hanting Chu, Pengcheng Zhang, Hai Dong, Yan Xiao, Shunhui Ji","doi":"10.1002/smr.2712","DOIUrl":"10.1002/smr.2712","url":null,"abstract":"<p>The growing popularity of smart contracts in various areas, such as digital payments and the Internet of Things, has led to an increase in smart contract security challenges. Researchers have responded by developing vulnerability detection tools. However, the effectiveness of these tools is limited due to the lack of authentic smart contract vulnerability datasets to comprehensively assess their capacity for diverse vulnerabilities. This paper proposes a <span>D</span>eep <span>L</span>earning-based <span>S</span>mart contract vulnerability <span>G</span>eneration approach (SGDL) to overcome this challenge. SGDL utilizes static analysis techniques to extract both syntactic and semantic information from the contracts. It then uses a classification technique to match injected vulnerabilities with contracts. A generative adversarial network is employed to generate smart contract vulnerability fragments, creating a diverse and authentic pool of fragments. The vulnerability fragments are then injected into the smart contracts using an abstract syntax tree to ensure their syntactic correctness. Our experimental results demonstrate that our method is more effective than existing vulnerability injection methods in evaluating the contract vulnerability detection capacity of existing detection tools. Overall, SGDL provides a comprehensive and innovative solution to address the critical issue of authentic and diverse smart contract vulnerability datasets.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 12","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141741766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vitor de Campos, José Maria N. David, Victor Ströele, Regina Braga
Finding software developers with expertise in specific technologies that align with industry domains is an increasingly critical requirement. However, due to the ever-changing nature of the technology industry, locating these professionals has become a significant challenge for companies and institutions. This research presents a comprehensive overview of studies exploring recommendation systems that can assist companies in addressing this pressing need. To conduct this study, we employ a hybrid systematic mapping approach, starting from 1,251 studies and arriving at a final selection of 21. Our work focuses on collecting data on the key technologies, methodologies, and data sets utilized in proposed recommendation systems, in order to design a new recommendation system that can effectively identify specialists capable of aligning specific technical knowledge with industry domains. The outcomes of this study include insights into current research trends in this field, alongside a practical overview of the considerations necessary for developing a recommendation system that successfully meets the criteria for aligning technical skills with industry domains. By following a hybrid systematic mapping methodology and presenting the outcomes in the form of insights, this research addresses the challenge of finding software developers with domain-specific expertise in a rapidly changing technology industry, laying the groundwork for aligning technical skills with industry domains.
{"title":"Aligning technical knowledge to an industry domain in global software development: A systematic mapping","authors":"Vitor de Campos, José Maria N. David, Victor Ströele, Regina Braga","doi":"10.1002/smr.2713","DOIUrl":"10.1002/smr.2713","url":null,"abstract":"<p>Finding software developers with expertise in specific technologies that align with industry domains is an increasingly critical requirement. However, due to the ever-changing nature of the technology industry, locating these professionals has become a significant challenge for companies and institutions. This research presents a comprehensive overview of studies exploring suitable recommendation systems that can assist companies in addressing this pressing need. To conduct this study, we employ a hybrid systematic mapping approach with an initial number of 1,251 studies and a final selection of 21 studies. Our work focuses on collecting data on key technologies, methodologies, and data sets utilized in proposed recommendation systems, to design a new recommendation system that can effectively identify specialists capable of aligning specific technical knowledge with industry domains. The outcomes of this study include insights into the current research trends in this field, alongside a practical overview of considerations necessary for developing a recommendation system that successfully meets the criteria for aligning technical skills with industry domains. By following a hybrid systematic mapping methodology and presenting the outcomes in the form of insights, this research addresses the challenge of finding software developers with domain-specific expertise in a rapidly changing technology industry, laying the groundwork for aligning technical skills with industry domains.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 12","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141646757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bohan Liu, He Zhang, Liming Dong, Zhiqi Wang, Shanshan Li
Software process simulation (SPS) has become an effective tool for software process management and improvement. However, its adoption in industry is lower than the research community expected, due to the burden of measurement cost and the high demand for domain knowledge. The difficulty of extracting appropriate metrics from real process enactment data is one of the great challenges. We aim to provide evidence-based support for the process metrics used in software process (simulation) modeling. A systematic literature review was performed, extending our previous review series, to draw a comprehensive understanding of the metrics for process modeling, following our proposed ontology of metrics in SPS. We identified 131 process modeling studies that collectively involve 1975 raw metrics and classified the metrics into 21 categories using a coding technique. We found that product and process external metrics are used infrequently in SPS modeling, while resource external metrics are widely used. We also analyzed the causal relationships between metrics and found that the models exhibit significant diversity: no pairwise relationship between metrics accounts for more than 10% of the SPS models. We further identified 17 data issues that may be encountered during measurement and 10 coping strategies. The results of this study provide process modelers with an evidence-based reference for the identification and use of metrics in SPS modeling and further contribute to the body of knowledge on software metrics in the context of process modeling. Furthermore, this study is not limited to process simulation but can be extended to software process modeling in general. Taking simulation metrics as standards and references can further motivate and guide software developers to improve the collection, governance, and application of process data in practice.
{"title":"Metrics for software process simulation modeling","authors":"Bohan Liu, He Zhang, Liming Dong, Zhiqi Wang, Shanshan Li","doi":"10.1002/smr.2676","DOIUrl":"10.1002/smr.2676","url":null,"abstract":"<p>Software process simulation (SPS) has become an effective tool for software process management and improvement. However, its adoption in industry is less than what the research community expected due to the burden of measurement cost and the high demand for domain knowledge. The difficulty of extracting appropriate metrics with real data from process enactment is one of the great challenges. We aim to provide evidence-based support of the process metrics for software process (simulation) modeling. A systematic literature review was performed by extending our previous review series to draw a comprehensive understanding of the metrics for process modeling following our proposed ontology of metrics in SPS. We identify 131 process modeling studies that collectively involve 1975 raw metrics and classified them into 21 categories using the coding technique. We found product and process external metrics are not used frequently in SPS modeling while resource external metrics are widely used. We analyze the causal relationships between metrics. We find that the models exhibit significant diversity, as no pairwise relationship between metrics accounts for more than 10% SPS models. We identify 17 data issues may encounter in measurement and 10 coping strategies. The results of this study provide process modelers with an evidence-based reference of the identification and the use of metrics in SPS modeling and further contribute to the development of the body of knowledge on software metrics in the context of process modeling. Furthermore, this study is not limited to process simulation but can be extended to software process modeling, in general. Taking simulation metrics as standards and references can further motivate and guide software developers to improve the collection, governance, and application of process data in practice.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 11","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141609839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a procedure for, and an evaluation of, using a semantic similarity metric as a loss function for neural source code summarization. Code summarization is the task of writing natural language descriptions of source code. Neural code summarization refers to automated techniques for generating these descriptions using neural networks. Almost all current approaches involve neural networks, either as standalone models or as part of pretrained large language models, for example, GPT, Codex, and LLaMA. Yet almost all also use a categorical cross-entropy (CCE) loss function for network optimization. Two problems with CCE are that (1) it computes loss over each word prediction one at a time, rather than evaluating a whole sentence, and (2) it requires a perfect prediction, leaving no room for partial credit for synonyms. In this paper, we extend our previous work on semantic similarity metrics to show a procedure for using semantic similarity as a loss function to alleviate these problems, and we evaluate this procedure in several settings in both metrics-driven and human studies. In essence, we propose to use a semantic similarity metric to calculate loss over the whole output sentence prediction per training batch, rather than just the loss for each word. We also propose to combine our loss with CCE for each word, which streamlines the training process compared to baselines. We evaluate our approach against several baselines and report improvement in the vast majority of conditions.
{"title":"Semantic similarity loss for neural source code summarization","authors":"Chia-Yi Su, Collin McMillan","doi":"10.1002/smr.2706","DOIUrl":"10.1002/smr.2706","url":null,"abstract":"<p>This paper presents a procedure for and evaluation of using a semantic similarity metric as a loss function for neural source code summarization. Code summarization is the task of writing natural language descriptions of source code. Neural code summarization refers to automated techniques for generating these descriptions using neural networks. Almost all current approaches involve neural networks as either standalone models or as part of a pretrained large language models, for example, GPT, Codex, and LLaMA. Yet almost all also use a categorical cross-entropy (CCE) loss function for network optimization. Two problems with CCE are that (1) it computes loss over each word prediction one-at-a-time, rather than evaluating a whole sentence, and (2) it requires a perfect prediction, leaving no room for partial credit for synonyms. In this paper, we extend our previous work on semantic similarity metrics to show a procedure for using semantic similarity as a loss function to alleviate this problem, and we evaluate this procedure in several settings in both metrics-driven and human studies. In essence, we propose to use a semantic similarity metric to calculate loss over the whole output sentence prediction per training batch, rather than just loss for each word. We also propose to combine our loss with CCE for each word, which streamlines the training process compared to baselines. We evaluate our approach over several baselines and report improvement in the vast majority of conditions.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 11","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing is one of the most time-consuming and unpredictable processes within the software development life cycle. As a result, many test case optimization (TCO) techniques have been proposed to make this process more scalable. The Object Constraint Language (OCL) was initially introduced as a constraint language to provide additional details for Unified Modeling Language models. However, as OCL continues to evolve, an increasing number of systems are being expressed in this language. Despite this growth, a noticeable research gap exists for the testing of systems whose specifications are expressed in OCL. In our previous work, we verified the effectiveness and efficiency of performing the test case prioritization (TCP) process for these systems. In this study, we extend that work by integrating the test case minimization (TCM) process to determine whether TCM can also benefit the testing process in the context of OCL. The evaluation of TCO approaches often relies on well-established metrics such as the average percentage of fault detection (APFD). However, APFD is not ideally suited to model-based testing (MBT). This paper addresses this limitation by proposing a modification to the APFD metric to enhance its viability for MBT scenarios. We conducted four case studies to evaluate the feasibility of integrating the TCM and TCP processes into our proposed approach. In these studies, we applied the multi-objective optimization algorithm NSGA-II and a genetic algorithm independently to the TCM and TCP processes. The objective was to assess the effectiveness and efficiency of combining TCM and TCP in enhancing the testing phase. Through experimental analysis, the results highlight the benefits of integrating TCM and TCP in the context of OCL-based testing, providing valuable insights for practitioners and researchers aiming to optimize their testing efforts. Specifically, the main contributions of this work are as follows: (1) We introduce the integration of the TCM process into the TCO process for systems expressed in OCL; this integration further benefits the testing process by reducing redundant test cases while ensuring sufficient coverage. (2) We comprehensively analyze the limitations associated with the commonly used APFD metric and then propose a modified version of the metric to overcome these weaknesses. (3) We systematically evaluate the effectiveness and efficiency of OCL-based TCO processes on four real-world case studies of different complexities.
{"title":"Object Constraint Language based test case optimization with modified Average Percentage of Fault Detection metric","authors":"Kunxiang Jin, Kevin Lano","doi":"10.1002/smr.2708","DOIUrl":"10.1002/smr.2708","url":null,"abstract":"<p>Testing is one of the most time-consuming and unpredictable processes within the software development life cycle. As a result, many test case optimization (TCO) techniques have been proposed to make this process more scalable. Object Constraint Language (OCL) was initially introduced as a constraint language to provide additional details to Unified Modeling Language models. However, as OCL continues to evolve, an increasing number of systems are being expressed by this language. Despite this growth, a noticeable research gap exists for the testing of systems whose specifications are expressed in OCL. In our previous work, we verified the effectiveness and efficiency of performing the test case prioritization (TCP) process for these systems. In this study, we extend our previous work by integrating the test case minimization (TCM) process to determine whether TCM can also benefit the testing process under the context of OCL. The evaluation of TCO approaches often relies on well-established metrics such as the average percentage of fault detection (APFD). However, the suitability of APFD for model-based testing (MBT) is not ideal. This paper addresses this limitation by proposing a modification to the APFD metric to enhance its viability for MBT scenarios. We conducted four case studies to evaluate the feasibility of integrating the TCM and TCP processes into our proposed approach. In these studies, we applied the multi-objective optimization algorithm NSGA-II and the genetic algorithm independently to the TCM and TCP processes. The objective was to assess the effectiveness and efficiency of combining TCM and TCP in enhancing the testing phase. Through experimental analysis, the results highlight the benefits of integrating TCM and TCP in the context of OCL-based testing, providing valuable insights for practitioners and researchers aiming to optimize their testing efforts. Specifically, the main contributions of this work include the following: (1) we introduce the integration of the TCM process into the TCO process for systems expressed by OCL. This integration benefits the testing process further by reducing redundant test cases while ensuring sufficient coverage. (2) We comprehensively analyze the limitations associated with the commonly used metric, APFD, and then, a modified version of the APFD metric has been proposed to overcome these weaknesses. (3). We systematically evaluate the effectiveness and efficiency of OCL-based TCO processes on four real-world case studies with different complexities.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 11","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.2708","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robotic Process Automation (RPA) is an emerging software technology for automating business processes. RPA uses software robots to perform, quickly and accurately, repetitive and error-prone tasks previously done by human actors. These robots mimic humans by interacting with existing software applications through user interfaces (UIs). The goal of RPA is to relieve employees of repetitive and tedious tasks, increasing productivity and providing better service quality. Yet, despite all the RPA benefits, most organizations fail to adopt RPA. One of the main reasons for this lack of adoption is that organizations are unable to effectively identify the processes that are suitable for RPA. This paper proposes a new method, called Rule-based robotic process analysis (RRPA), that assists process automation practitioners in classifying business processes according to their suitability for RPA. The RRPA method computes a suitability score for RPA by combining two RPA goals: (i) RPA feasibility, which assesses the extent to which the process or activity lends itself to automation with RPA, and (ii) RPA relevance, which assesses whether the RPA automation is worthwhile. We tested the RRPA method on a set of 13 processes. The results showed that the method achieves 82.05% effectiveness and 76.19% efficiency.
{"title":"A rule-based method to effectively adopt robotic process automation","authors":"Maxime Bédard, Abderrahmane Leshob, Imen Benzarti, Hafedh Mili, Raqeebir Rab, Omar Hussain","doi":"10.1002/smr.2709","DOIUrl":"10.1002/smr.2709","url":null,"abstract":"<p>Robotic Process Automation (RPA) is an emerging software technology for automating business processes. RPA uses software robots to perform repetitive and error-prone tasks previously done by human actors quickly and accurately. These robots mimic humans by interacting with existing software applications through user interfaces (UI). The goal of RPA is to relieve employees from repetitive and tedious tasks to increase productivity and to provide better service quality. Yet, despite all the RPA benefits, most organizations fail to adopt RPA. One of the main reasons for the lack of adoption is that organizations are unable to effectively identify the processes that are suitable for RPA. This paper proposes a new method, called Rule-based robotic process analysis (RRPA), that assists process automation practitioners to classify business processes according to their suitability for RPA. The RRPA method computes a suitability score for RPA using a combination of two RPA goals: (i) the RPA feasibility, which assesses the extent to which the process or the activity lends itself to automation with RPA and (ii) the RPA relevance, which assesses whether the RPA automation is worthwhile. We tested the RRPA method on a set of 13 processes. The results showed that the method is effective at 82.05% and efficient at 76.19%.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 11","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.2709","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gamification is an attractive strategy in many contexts, including software process improvement, where it has shown positive results associated with increased motivation and other social and human factors (SHF). Such factors are required to improve software processes in the automotive industry, given the influence of changing conditions and of individuals' behavior. However, gamification strategies must be treated with scientific rigor. It is therefore necessary to analyze critical dimensions such as the gamification maturity level, the ability to intervene, and the influence of social and human factors. This analysis is motivated by the relationship between social and human factors and the success of process improvement, and it justifies the authors' interest in analyzing, along these dimensions, a gamification strategy implemented in the automotive industry. This article therefore presents the analysis from the perspective of developing software-controlled systems in automobiles. It uses a deductive approach to abstract all the design aspects of a strategy created and implemented in an automotive software development environment. One of the most representative findings of this study is the strategy's capacity to promote SHF, in particular motivation, commitment, team cohesion, emotional intelligence, and autonomy.
{"title":"Promoting social and human factors through a gamified automotive software development environment","authors":"Gloria Piedad Gasca-Hurtado, Mirna Muñoz, Samer Sameh","doi":"10.1002/smr.2704","DOIUrl":"10.1002/smr.2704","url":null,"abstract":"<p>Gamification is an attractive strategy for different contexts, including software process improvement, where it presents positive results associated with increased factors such as motivation and others classified into social and human factors. Such factors are required to improve software processes in the automotive industry due to the influence of changes in the conditions and the behavior of individuals. However, the treatment of gamification strategies requires rigor at a scientific level. Therefore, it is necessary to analyze critical dimensions such as the gamification maturity level, the ability to intervene, and the influence of social and human factors. Such analysis is motivated by the relationship between social and human factors and the success of a process improvement. The above justifies the researchers' interest in this article in analyzing a gamification strategy implemented in the automotive industry from such dimensions. Therefore, this article presents the analysis from the point of view of developing software-controlled systems in automobiles. Besides, it uses a deductive approach to conduct this analysis to abstract all the design aspects of a strategy created and implemented in a software development automotive environment. One of the most representative findings of this study is the strategy's capacity to promote SHF, which identifies motivation, commitment, team cohesion, emotional intelligence, and autonomy.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 11","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chaymae Miloudi, Laila Cheikhi, Ali Idri, Alain Abran
Software maintenance is a challenging and laborious software management activity, especially for open-source software. The bug reports of such software allow maintenance activities to be tracked and have been used in several empirical studies to better predict bug resolution effort. These reports are known for their large volume and contain nonrelevant instances that need to be preprocessed before use. To this end, instance selection (IS) has been proposed in the literature as a way to reduce the size of the datasets while keeping the relevant instances. The objective of this study is to empirically investigate the impact of data preprocessing through IS on the performance of bug resolution prediction classifiers. To deal with this, four IS algorithms, namely, edited nearest neighbor (ENN), repeated ENN, all-k nearest neighbors, and model class selection, are applied to five large datasets, together with five machine learning techniques. Overall, 125 experiments were performed and compared. The findings of this study highlight the positive impact of IS in providing better estimates for bug resolution prediction classifiers, in particular with the repeated ENN and ENN algorithms.
{"title":"On the value of instance selection for bug resolution prediction performance","authors":"Chaymae Miloudi, Laila Cheikhi, Ali Idri, Alain Abran","doi":"10.1002/smr.2710","DOIUrl":"10.1002/smr.2710","url":null,"abstract":"<p>Software maintenance is a challenging and laborious software management activity, especially for open-source software. The bugs reports of such software allow tracking maintenance activities and were used in several empirical studies to better predict the bug resolution effort. These reports are known for their large size and contain nonrelevant instances that need to be preprocessed to be suitable for use. To this end, instance selection (IS) has been proposed in the literature as a way to reduce the size of the datasets, while keeping the relevant instances. The objective of this study is to perform an empirical study that investigates the impact of data preprocessing through IS on the performance of bug resolution prediction classifiers. To deal with this, four IS algorithms, namely, edited nearest neighbor (ENN), repeated ENN, all-k nearest neighbors, and model class selection, are applied on five large datasets, together with five machine learning techniques. Overall, 125 experiments were performed and compared. The findings of this study highlight the positive impact of IS in providing better estimates for bug resolution prediction classifiers, in particular using repeated ENN and ENN algorithms.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 11","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}