
Latest publications in the Journal of Software-Evolution and Process

Leveraging Levy Flight and Greylag Goose Optimization for Enhanced Cross-Project Defect Prediction in Software Evolution
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-03-24 · DOI: 10.1002/smr.70013
Kripa Sekaran, Sherly Puspha Annabel Lawrence

Cross-project defect prediction (CPDP) is crucial for predicting defects and ensuring software quality. The performance of traditional CPDP models degrades due to class imbalance between different projects and differences in data distribution. To overcome these limitations, a novel approach named the Levy flight–enabled greylag goose optimized UniXcoder-based stacked defect predictor (LFGGO-USDP) is proposed for predicting cross-project defects in software engineering. In this paper, 23 software projects are selected from diverse datasets such as PROMISE, ReLink, AEEEM, and NASA and preprocessed to enhance reliability and reduce class imbalance. A transformation model maps the source and target projects in the feature space to enhance predictive performance. During feature selection, the Levy flight (LF) mechanism is embedded in the greylag goose optimization (GGO) algorithm to localize features in the source code, enhancing diversity and mitigating local-optimum issues. A UniXcoder-based stacked bidirectional long short-term memory network (U-SBiLSTM) is implemented as the cross-project defect predictor. The UniXcoder model extracts semantic information from tokenized source code; its output is then fed to the SBiLSTM, which models relationships across the source code. The output of UniXcoder (the semantic features) is then integrated with the output of the SBiLSTM (the sequential and temporal dependencies). After these features are concatenated, an attention mechanism selects the salient information for categorizing defective and nondefective classes. Experimental investigations analyze nondefective and defective cases in software projects, and numerical validation with different evaluation models is conducted to assess the approach's superiority. The proposed model achieves the highest defect prediction accuracy of 0.986 compared with other existing approaches, demonstrating better prediction outcomes.
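The fusion described above (UniXcoder token embeddings combined with SBiLSTM sequence features, then pooled by attention before classification) can be pictured with a minimal PyTorch-style sketch. The module names, dimensions, and the simple additive attention below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StackedBiLSTMWithAttention(nn.Module):
    """Illustrative U-SBiLSTM-style head: semantic token embeddings
    (e.g., from UniXcoder) -> stacked BiLSTM -> concat -> attention -> classifier."""
    def __init__(self, emb_dim=768, hidden=256, num_layers=2, num_classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, num_layers=num_layers,
                              batch_first=True, bidirectional=True)
        fused_dim = emb_dim + 2 * hidden        # semantic + sequential features
        self.attn = nn.Linear(fused_dim, 1)     # simple additive attention (assumed)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, token_embeddings):        # (batch, seq_len, emb_dim)
        seq_feats, _ = self.bilstm(token_embeddings)          # (batch, seq_len, 2*hidden)
        fused = torch.cat([token_embeddings, seq_feats], -1)  # concatenate both views
        weights = torch.softmax(self.attn(fused).squeeze(-1), dim=1)
        pooled = (fused * weights.unsqueeze(-1)).sum(dim=1)   # attention-weighted pooling
        return self.classifier(pooled)          # defective vs. nondefective logits

# Example: embeddings for a batch of 4 code snippets, 128 tokens each
model = StackedBiLSTMWithAttention()
logits = model(torch.randn(4, 128, 768))
print(logits.shape)  # torch.Size([4, 2])
```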

Citations: 0
A Multi-Perspective Review on Embedded Systems Quality: State of the Field, Challenges, and Research Directions
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-03-17 · DOI: 10.1002/smr.70007
Müge Canpolat Şahin, Ayça Kolukisa Tarhan

The use of embedded systems has increased significantly over the last decade with the proliferation of Internet of Things technology, automotive and healthcare innovations, and the spread of smart home appliances and consumer electronics. With this growth, the need for higher-quality embedded systems has risen as well. Various guidelines and standards exist for product quality evaluation, such as ISO/IEC 9126 and ISO/IEC 25010. However, these guidelines cannot be applied directly to embedded systems due to the nature of these systems: applying traditional quality standards or guidelines without modification may degrade system performance, increase memory usage or energy consumption, or adversely affect other critical physical metrics. Consequently, several models and approaches have either been introduced or have adopted existing guidelines to produce high-quality embedded systems. With this motivation, to understand the state of the art and to identify research directions in the field, we conducted a systematic literature review (SLR). In our research, we investigated studies published from 1980 to 2024 and provide a comprehensive review of the scientific literature on quality models, quality attributes, employed practices, and the challenges, gaps, and pitfalls in the field.

Citations: 0
Multilabeled Emotions Classification in Software Engineering Text Using Convolutional Neural Networks and Word Embeddings
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-03-11 · DOI: 10.1002/smr.70010
Atif Ali Wagan, Shuaiyong Li

Effective collaboration among software developers relies heavily on their ability to communicate efficiently, with emotions playing a pivotal role in this process. Emotions are widely involved in human decision-making, making automated tools for emotion classification within developer communication channels essential. These tools can enhance productivity and collaboration by increasing awareness of fellow developers' emotions. Previous approaches, such as HOMER, RAKEL, and EmoTxt, have been proposed to classify emotions in Stack Overflow and Jira datasets at a finer granularity, but these tools face performance challenges. To address these limitations, we aim to enhance multilabeled emotion classification performance by leveraging TextCNN, word embeddings, and hyperparameter optimization. We validate the performance of this method by comparing it with the best previous methods for emotion classification in software engineering text. The approach achieves an F1-Micro score of 84.6001% on the Jira dataset and 76.9366% on the Stack Overflow dataset, an improvement of 3.5001% and 8.6366%, respectively. This advancement underscores the potential of the method to improve emotion classification performance, thereby fostering better collaboration and productivity among software developers.
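As a rough illustration of the pipeline the abstract describes (word embeddings fed to a convolutional text classifier with an independent sigmoid output per emotion), here is a minimal PyTorch-style sketch; the vocabulary size, filter sizes, and six-label emotion set are assumptions made only for the example.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Illustrative TextCNN for multilabel emotion classification:
    word embeddings -> parallel 1-D convolutions -> max-pool -> sigmoid per label."""
    def __init__(self, vocab_size=20000, emb_dim=300, num_labels=6,
                 filter_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, k) for k in filter_sizes)
        self.out = nn.Linear(num_filters * len(filter_sizes), num_labels)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        logits = self.out(torch.cat(pooled, dim=1))
        return torch.sigmoid(logits)                   # independent per-emotion probabilities

# Example: batch of 8 posts, 50 tokens each; training would use a BCE-style loss
probs = TextCNN()(torch.randint(0, 20000, (8, 50)))
print(probs.shape)  # torch.Size([8, 6])
```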

Citations: 0
Using Dynamic and Static Techniques to Establish Traceability Links Between Production Code and Test Code on Python Projects: A Replication Study
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-03-11 · DOI: 10.1002/smr.70011
Zhifei Chen, Chiheng Jia, Yanhui Li, Lin Chen

The relationship between test code and production code, that is, test-to-code traceability, plays an essential role in the verification, reliability, and certification of software systems. Prior work on test-to-code traceability focuses mainly on Java. However, as Python allows more flexible testing styles, it is still unknown whether existing traceability approaches work well on Python projects. To address this gap, this paper evaluates whether existing traceability approaches can accurately identify test-to-code links in Python projects. We collected seven popular Python projects and carried out an exploratory study at both the method and module levels (involving a total of 3198 test cases). On these projects, we evaluated 15 individual traceability techniques along with cross-level information propagation and four combining resolution strategies. The results reveal that the performance of test-to-code traceability approaches on Python differs from Java in several ways: (1) most of the existing techniques show poor effectiveness on Python; (2) after augmenting with cross-level information, the recall surprisingly drops; and (3) the machine learning–based combination approach achieves the best recall but the worst precision. These findings shed light on the best traceability approaches for Python projects and also provide guidelines for researchers and the Python community.
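The abstract does not enumerate the individual techniques, but a common baseline in test-to-code traceability studies is naming-convention matching; the sketch below illustrates that idea for Python projects. The "test_" prefix / "_test" suffix rule and the unambiguous-match policy are assumptions for illustration, not the paper's exact procedure.

```python
from pathlib import Path

def naming_convention_links(repo_root):
    """Illustrative naming-convention baseline for test-to-code traceability:
    a test module 'test_foo.py' (or 'foo_test.py') is linked to the production
    module 'foo.py' only if exactly one module with that name exists in the repo."""
    root = Path(repo_root)
    production = {}
    for path in root.rglob("*.py"):
        name = path.stem
        if not (name.startswith("test_") or name.endswith("_test")):
            production.setdefault(name, []).append(path)

    links = {}
    for test_path in root.rglob("*.py"):
        name = test_path.stem
        if name.startswith("test_"):
            target = name[len("test_"):]
        elif name.endswith("_test"):
            target = name[: -len("_test")]
        else:
            continue
        candidates = production.get(target, [])
        if len(candidates) == 1:               # keep unambiguous matches only
            links[test_path] = candidates[0]
    return links

# Example: print the recovered test-to-code links for a local checkout
for test, prod in naming_convention_links(".").items():
    print(f"{test} -> {prod}")
```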

Citations: 0
Guest Editorial for the Special Issue on Source Code Analysis and Manipulation, SCAM 2022
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-03-05 · DOI: 10.1002/smr.70006
Banani Roy, Mohammad Ghafari, Mariano Ceccato
This issue of the Journal of Software: Evolution and Process focuses on the foundation of software engineering: the source code itself. While much of the software engineering community properly emphasizes aspects like specification, design, and requirements engineering, the source code provides the only precise description of a system's behavior. Therefore, the analysis and manipulation of source code remain critical concerns.

This issue contains, among others, the extended versions of the best papers presented at the 22nd IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM 2022), held in Limassol, Cyprus, in October 2022.

The SCAM Conference aims to bring together researchers and practitioners working on theory, techniques, and applications that concern analysis and/or manipulation of the source code of software systems. The term "source code" refers to any fully executable description of a software system, such as machine code, (very) high-level languages, and executable graphical representations of systems. The term "analysis" refers to any (semi)automated procedure that yields insight into source code, while "manipulation" refers to any automated or semi-automated procedure that takes and returns source code. While much attention in the wider software engineering community is directed towards other aspects of systems development and evolution, such as specification, design, and requirements engineering, it is the source code that contains the only precise description of the behavior of a system. Hence, the analysis and manipulation of source code remains a pressing concern for which SCAM 2022 solicited high-quality paper submissions.

The SCAM 2022 conference received a total of 73 submissions. There were 45 submissions to the main research track, of which one was desk rejected for violation of the double-blind policy. The remaining 44 submissions went through a thorough review process. Every paper was fully reviewed by three or more program committee members for relevance, soundness, and originality, and discussed before final decisions were made. The program committee decided to accept 17 papers (acceptance rate 39%). The Engineering track received 11 submissions, desk rejected one, and accepted 5; the NIER track received 16 submissions and accepted 10; and finally, the RENE track received 4 submissions and accepted 1.

A public open call was published to invite outstanding papers by other authors on source code analysis and manipulation. In total, 10 papers were submitted to this special issue. Each of the submissions was reviewed by a minimum of three expert referees. Following the first round of review, the authors were asked to revise their papers in response to the referees' comments, and the revised drafts were then reviewed for conformance to the referees' comments. Among those, only five papers were selected for publication in this special issue.

The selected papers represent some of the best work presented at SCAM, covering all the major areas of interest: refactoring, by Yang Zhang and Shuai Hong and by Richárd Szalay and Zoltán Porkoláb; design pattern detection, by Hugo Andrade, João Bispo, and Filipe F. Correia; string analysis, by Luca Negrini, Vincenzo Arceri, Agostino Cortesi, and Pietro Ferrara; and regression testing, by Francesco Altiero, Anna Corazza, Sergio Di Martino, Adriano Peron, and Luigi Libero Lucio Starace.

In the first paper, "ReInstancer: An Automatic Refactoring Approach for Instanceof Pattern Matching," Zhang et al. introduce ReInstancer, a tool that automates refactoring toward instanceof pattern matching by converting multi-branch statements into switch expressions, improving code quality and readability. Its effectiveness is demonstrated by refactoring more than 7700 instances across 20 real-world projects.

The paper "Refactoring to Standard C++20 Modules" by Szalay et al. presents a semi-automatic approach to modularizing existing C++ projects, organizing elements into modules through dependency analysis and clustering. The study shows that upgrading to C++20 modules is constrained by a project's existing architectural design.

In the third paper, "Multi-Language Detection of Design Pattern Instances," Andrade et al. propose DP-LARA, a multi-language pattern detection tool that leverages the virtual abstract syntax tree (AST) of the LARA framework to identify design patterns across object-oriented programming languages, supporting language-agnostic code analysis to improve software comprehension.

The paper "Tarsis: An Effective Automata-Based Abstract Domain for String Analysis" by Negrini et al. presents an abstract domain for string values based on finite-state automata that outperforms baseline approaches to string analysis, a typical task in source code analysis.

In the last paper, "Regression Test Prioritization Leveraging Source Code Similarity With Tree Kernels," Altiero et al. introduce two novel regression test prioritization (RTP) techniques that apply tree kernels to the abstract syntax trees of source code to measure structural change and prioritize tests accordingly. Evaluated on five Java projects, the proposed techniques achieve higher fault detection rates than traditional RTP approaches.

We hope you find these papers engaging and encourage those interested to join us at future SCAM conferences.
Citations: 0
BET-BiLSTM Model: A Robust Solution for Automated Requirements Classification
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-03-05 · DOI: 10.1002/smr.70012
Jalil Abbas, Cheng Zhang, Bin Luo

Transformer methods have revolutionized software requirements classification by combining advanced natural language processing to accurately understand and categorize requirements. While traditional methods like Doc2Vec and TF-IDF are useful, they often fail to capture the deep contextual relationships and subtle meanings inherent in textual data. Individual transformer models possess unique strengths and weaknesses that affect their ability to capture various aspects of the data. Consequently, relying on a single model can lead to suboptimal feature representations, limiting the overall performance of the classification task. To address this challenge, our study introduces an innovative BET-BiLSTM (balanced ensemble transformers using Bi-LSTM) model. This model combines the strengths of five transformer-based models (BERT, RoBERTa, XLNet, GPT-2, and T5) through a weighted-averaging ensemble, resulting in a sophisticated and resilient feature set. By employing data balancing techniques, we ensure a well-distributed representation of features, addressing the issue of class imbalance. The BET-BiLSTM model plays a crucial role in the classification process, achieving an impressive accuracy of 96%. Moreover, the practical applicability of the model is validated through its successful application to three publicly available unlabeled datasets and one additional labeled dataset. The model significantly improved the completeness and reliability of these datasets by accurately predicting labels for previously unclassified requirements. This makes our approach a powerful tool for large-scale requirements analysis and classification tasks, outperforming traditional single-model methods and demonstrating its real-world effectiveness.
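The core fusion step (per-encoder token embeddings combined by weighted averaging and classified by a BiLSTM) can be sketched as follows; the learnable weights, the common embedding dimension, and the assumption that all five encoders are already aligned to one tokenization are illustrative simplifications, not details from the paper.

```python
import torch
import torch.nn as nn

class WeightedEnsembleBiLSTM(nn.Module):
    """Illustrative BET-BiLSTM-style fusion: token embeddings from several
    transformer encoders are combined by learnable weighted averaging and the
    fused sequence is classified by a BiLSTM. Assumes per-encoder outputs are
    already projected to a common dimension and aligned token-by-token."""
    def __init__(self, num_encoders=5, emb_dim=768, hidden=128, num_classes=2):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_encoders))  # one weight per encoder
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, encoder_outputs):          # (batch, num_encoders, seq_len, emb_dim)
        w = torch.softmax(self.weights, dim=0)   # normalized averaging weights
        fused = (encoder_outputs * w.view(1, -1, 1, 1)).sum(dim=1)  # weighted average
        seq, _ = self.bilstm(fused)
        return self.classifier(seq[:, -1, :])    # classify from the final BiLSTM state

# Example: 4 requirements, 5 encoders, 32 tokens, 768-dim embeddings
logits = WeightedEnsembleBiLSTM()(torch.randn(4, 5, 32, 768))
print(logits.shape)  # torch.Size([4, 2])
```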

Citations: 0
Why Not Fix This Bug? Characterizing and Identifying Bug-Tagged Issues That Are Truly Fixed by Developers
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-02-25 · DOI: 10.1002/smr.70008
Ye Wang, Zhengru Han, Qiao Huang, Bo Jiang

The GitHub issue community serves as the primary means for project developers to obtain information about program bugs, and numerous GitHub users post issues based on encountered project vulnerabilities or error messages. However, these issues often vary in quality, placing a significant time burden on project developers. By collecting 2500 bug-related issues from five GitHub projects, we first manually analyze a large volume of issue information to formulate rules for identifying whether a bug-tagged issue is truly fixed by project developers. We find that a substantial number (ranging from 29% to 68.4% across projects) of bug-tagged issues are not truly fixed by project developers. We empirically investigate the characteristics of such issues and summarize the reasons why they are not fixed. Then, we propose an automated approach called DFBERT to identify the bug-tagged issues that are more likely to be fixed by project developers. Our approach incorporates both text and non-text features to train a neural network–based prediction model. The experimental results show that our approach achieves an average F1-score of 0.66 in the inter-project setting, and the F1-score increases to 0.77 when part of the testing data is added for training.
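A minimal sketch of the general idea behind combining text and non-text features follows, assuming a pooled BERT-style embedding of the issue text is computed beforehand and a handful of numeric metadata features are available; the layer sizes and feature count are invented for illustration and are not DFBERT's actual architecture.

```python
import torch
import torch.nn as nn

class TextPlusMetaClassifier(nn.Module):
    """Illustrative predictor: a pooled text embedding of the issue (e.g., from a
    BERT encoder, computed beforehand) is concatenated with numeric non-text
    features (comment count, reporter history, ...) and fed to a small
    feed-forward network that predicts 'will be fixed' vs. 'will not be fixed'."""
    def __init__(self, text_dim=768, meta_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + meta_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, text_embedding, meta_features):
        return self.net(torch.cat([text_embedding, meta_features], dim=1))

# Example: 16 issues, 768-dim text embeddings, 8 hand-crafted non-text features
logits = TextPlusMetaClassifier()(torch.randn(16, 768), torch.randn(16, 8))
print(logits.shape)  # torch.Size([16, 2])
```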

Citations: 0
Enabling Quantum Privacy and Security by Design: Imperatives for Contemporary State-of-the-Art in Quantum Software Engineering
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-02-23 · DOI: 10.1002/smr.70005
Vita Santa Barletta, Danilo Caivano, Anibrata Pal, Michele Scalera, Manuel A. Serrano Martin

With the advent of Quantum Computing and the exponential research endeavors of the past couple of decades, we are looking at a Golden Era of Quantum Computing. We are transitioning into an age of hybrid classical-quantum computers, where quantum computational resources are selectively harnessed for resource-intensive tasks. While Quantum Computing promises immense future computational innovation, it also comes with privacy and security challenges. To date, Privacy by Design (PbD) and Security by Design (SbD) frameworks and guidelines in the Quantum Software Engineering (QSE) domain are still nebulous, and there are no comprehensive studies on them. In this study, therefore, we identify the current state of the art in the relevant literature and investigate the principles of PbD and SbD in the domain of QSE. This is the first study to identify the state of the art of Quantum PbD and Quantum SbD in QSE. Furthermore, we identify gaps in the current literature and extend them into action points towards a robust body of literature on Quantum PbD and SbD. We recognize the crucial role of researchers, academics, and professionals in the fields of Quantum Computing and Software Engineering in conducting more empirical studies and shaping the future of PbD and SbD principles in QSE.

Citations: 0
cvrip: A Visual GUI Ripping Framework
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-02-23 · DOI: 10.1002/smr.70009
Heji Huang, Ju Qian, Wenduo Jia, Yiming Jin

GUI ripping explores the graphical user interface of an application to build a model that expresses the application's behavior. The ripped GUI model is useful in various software engineering tasks. Traditional GUI ripping techniques depend on the underlying GUI frameworks to provide GUI structure information; they are difficult to apply across platforms or on nonnative applications where that structure information cannot easily be obtained. This work introduces cvrip, a visual GUI ripping framework, to address the problem. cvrip analyzes the GUI screen visually for ripping and does not rely on the underlying GUI frameworks. We introduce several new techniques to enable efficient visual GUI ripping, for example, a YOLO v5-based model to detect executable widgets, a state recognition acceleration method for fast model updating, and several GUI exploration strategies that take the characteristics of imperfect visual analysis into account. Experiments are conducted to evaluate the various technique choices in visual GUI ripping and to compare the solution with traditional-style ripping. The results show that cvrip achieves exploration coverage competitive with traditional approaches. This suggests visual GUI ripping is a direction worthy of further study.
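The widget-detection step can be pictured with a short sketch using the public YOLOv5 hub API; the weights file "widget_detector.pt" (assumed to be fine-tuned on annotated GUI screenshots) and the confidence threshold are hypothetical, since the paper's trained model is not described in detail here.

```python
import torch

# Load a YOLOv5 model with custom weights, assumed fine-tuned on GUI screenshots
# annotated with executable widgets (buttons, input fields, checkboxes, ...).
model = torch.hub.load("ultralytics/yolov5", "custom", path="widget_detector.pt")
model.conf = 0.4  # confidence threshold (illustrative choice)

# Run detection on a captured screen image and keep the widget boxes,
# which a ripper would then click or fill to explore new GUI states.
results = model("screenshot.png")
detections = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
for _, det in detections.iterrows():
    center = ((det.xmin + det.xmax) / 2, (det.ymin + det.ymax) / 2)
    print(f"{det['name']} at {center} (conf={det.confidence:.2f})")
```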

Citations: 0
A Flexible Framework to Ensure Traceability, Consistency, and Propagation of KPIs Evolution
IF 1.7 · CAS Quartile 4 (Computer Science) · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-02-18 · DOI: 10.1002/smr.70004
Eladio Domínguez, Beatriz Pérez, Ángel L. Rubio, María A. Zapata

Organizations use key performance indicators (KPIs) to assess the effectiveness and efficiency of their procedures and processes. In a world that is constantly evolving and hyperconnected via the internet, it is of great interest to analyze how changes (organizational, legal, technological, or other) can lead to modifications in the KPIs involved. However, little attention has been paid to KPI evolution, either in the scientific literature or in developed solutions. This paper presents A Flexible Framework for the Evolution, Consistency and Traceability of KPIs (AFFECTK), which aims to establish the basis for suitable management of KPI evolution. The feasibility of the proposal is demonstrated through a proof of concept developed using a reasoning tool based on Constraint Logic Programming. The framework is further evaluated using real KPI case studies to assess the functional suitability of our approach.
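The abstract mentions a proof of concept built on Constraint Logic Programming; as a much simpler illustration of the underlying idea (propagating a change in one KPI or measure to every KPI that depends on it), here is a plain-Python sketch. The KPI names and dependency structure are invented for the example.

```python
# Toy KPI dependency model: each KPI lists the KPIs/measures it is computed from.
kpi_dependencies = {
    "on_time_delivery_rate": ["deliveries_on_time", "total_deliveries"],
    "customer_satisfaction_index": ["survey_score", "on_time_delivery_rate"],
    "quarterly_performance": ["customer_satisfaction_index", "revenue"],
}

def affected_kpis(changed, dependencies):
    """Return every KPI that (transitively) depends on the changed element,
    i.e., the set whose definitions must be revisited to stay consistent."""
    affected, frontier = set(), {changed}
    while frontier:
        nxt = {kpi for kpi, deps in dependencies.items()
               if frontier & set(deps) and kpi not in affected}
        affected |= nxt
        frontier = nxt
    return affected

# Example: a change to how on-time deliveries are measured propagates upward
# to all three dependent KPIs above (set order may vary when printed).
print(affected_kpis("deliveries_on_time", kpi_dependencies))
```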

Citations: 0