This paper presents a comprehensive study on evaluating and predicting software maintainability, using the ISO/IEC 25010 standard as the foundation for software quality assessment. The standard defines eight primary quality characteristics, including maintainability, which is further divided into subcharacteristics to enable a detailed assessment of software systems. In this context, the QualCode framework is proposed as a solution based on ISO/IEC 25010 principles for calculating the maintainability metric: it combines a selected set of submetrics with machine learning techniques to improve prediction precision. QualCode introduces a data-driven, automated approach to software maintainability evaluation, allowing developers and quality assurance teams to gauge the modularity, reusability, analyzability, modifiability, and testability of their software products more effectively. Through an extensive evaluation of prediction models and comparative analyses with existing tools on a diverse set of Java projects, the findings highlight the superior performance of QualCode in predicting software maintainability, reinforcing its significance in the software engineering domain.
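The aggregation step described above can be sketched as follows. This is a minimal illustration of combining ISO/IEC 25010 maintainability subcharacteristic scores into one value; the equal weights and the sample scores are hypothetical assumptions, not QualCode's actual submetrics or weighting.

```python
# Illustrative sketch: aggregate ISO/IEC 25010 maintainability
# subcharacteristic scores into a single maintainability value.
# Subcharacteristic names follow ISO/IEC 25010; the weights and
# input scores below are hypothetical, not QualCode's actual values.

SUBCHARACTERISTICS = ["modularity", "reusability", "analyzability",
                      "modifiability", "testability"]

def maintainability_score(scores, weights=None):
    """Weighted mean of subcharacteristic scores, each in [0, 1]."""
    if weights is None:
        weights = {s: 1.0 for s in SUBCHARACTERISTICS}  # equal weights
    total_w = sum(weights[s] for s in SUBCHARACTERISTICS)
    return sum(scores[s] * weights[s] for s in SUBCHARACTERISTICS) / total_w

project = {"modularity": 0.8, "reusability": 0.6, "analyzability": 0.7,
           "modifiability": 0.75, "testability": 0.65}
print(round(maintainability_score(project), 3))  # 0.7
```

In the paper's setting, a learned model would replace the fixed weights; this sketch only shows the submetric-to-score aggregation shape.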
"QualCode: A Data-Driven Framework for Predicting Software Maintainability Based on ISO/IEC 25010" by Elham Azhir, Morteza Zakeri, Yasaman Abedini, and Mojtaba Mostafavi Ghahfarokhi. Science of Computer Programming, vol. 250, Article 103399 (2026-05-01). DOI: 10.1016/j.scico.2025.103399
Context: Lack of communication and coordination in a software development community can lead to short- and long-term social problems. This can result in the misalignment of socio-technical congruence, understood as the disconnect between social and technical factors, which in turn leads to suboptimal decisions. The absence of adequate strategies to manage these problems, together with deficient organizational structures, favors the accumulation of social debt. Objective: This paper collects and analyzes studies related to the causes, effects, consequences, methods, patterns, domains, and prevention and management strategies of social debt in software development. While agile environments are included in the analysis, the overall focus covers a broader range of organizational and methodological contexts, including distributed, hybrid, and other team models. Method: A systematic literature review was conducted through a parameterized search in different databases. This allowed us to identify and filter 231 papers, of which 85 were considered relevant and 45 were selected as primary studies. Results: The main socio-technical factors in which social debt is generated and exerts its impact were identified, along with a limited number of tools (mainly conceptual models and automated mechanisms) that facilitate its detection by defining potential causes affecting the well-being of teams and companies. Conclusions: Based on the findings, it is important to further study other causes that reveal the presence of social debt, as well as to develop strategies to mitigate its effects on the social and emotional well-being of professionals.
"Social debt in software development environments: A systematic literature review" by Eydy Suárez-Brieva, César Jésus Pardo Calvache, and Ricardo Pérez-Castillo. Science of Computer Programming, vol. 249, Article 103396 (2026-04-01). DOI: 10.1016/j.scico.2025.103396
Heap-based memory vulnerabilities are critical to software security and reliability. Whether these vulnerabilities are exposed depends on various factors, including code coverage, the frequency of heap operations, and their execution order. Current fuzzing solutions strive to identify these vulnerabilities effectively by employing static analysis or incorporating feedback on the sequence of heap operations. However, these solutions exhibit limited practical applicability and fail to comprehensively address the temporal and spatial dimensions of heap operations. In this paper, we propose a dedicated fuzzing technique called CtxFuzz that efficiently discovers heap-based temporal and spatial memory vulnerabilities without necessitating domain-specific knowledge. CtxFuzz employs context heap operation sequences (CHOS) as a novel feedback mechanism to guide the fuzzing process. CHOS comprises sequences of heap operations, including allocation, deallocation, read, and write, that are associated with their corresponding heap memory addresses and identified within the current context during the execution of the target program. By doing so, CtxFuzz can explore more heap states and trigger more heap-based memory vulnerabilities, both temporal and spatial. We evaluate CtxFuzz on 9 real-world open-source programs and compare its performance against 7 state-of-the-art fuzzers. The results indicate that CtxFuzz outperforms most of these fuzzers in terms of discovering heap-based memory vulnerabilities. Furthermore, our experiments led to the identification of ten zero-day vulnerabilities (10 CVEs).
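The CHOS feedback idea can be illustrated with a toy sketch: record recent heap operations tagged with a context identifier, and hash each full window into a coverage-style bitmap so that inputs producing novel heap-operation sequences are kept. The window size, address masking, and hashing below are illustrative assumptions, not CtxFuzz's actual implementation.

```python
# Toy sketch of CHOS-style feedback: record sequences of heap
# operations (alloc/free/read/write) tagged with a calling-context id,
# and hash the most recent window into a coverage-style bitmap.
# Window size, hash, and address encoding are illustrative assumptions.

from collections import deque

WINDOW = 4          # how many recent heap ops form one "sequence"
MAP_SIZE = 1 << 16  # coverage bitmap size, as in AFL-style fuzzers

class ChosFeedback:
    def __init__(self):
        self.window = deque(maxlen=WINDOW)
        self.bitmap = bytearray(MAP_SIZE)

    def record(self, op, addr, ctx):
        """op in {'alloc','free','read','write'}; ctx = context id."""
        self.window.append((op, addr & 0xFFF, ctx))  # page-relative addr
        if len(self.window) == WINDOW:
            h = hash(tuple(self.window)) % MAP_SIZE
            novel = self.bitmap[h] == 0
            self.bitmap[h] = 1
            return novel  # True -> unseen heap-op sequence, keep input
        return False

fb = ChosFeedback()
trace = [("alloc", 0x1000, 1), ("write", 0x1008, 1),
         ("free", 0x1000, 2), ("read", 0x1008, 2)]
novel = [fb.record(*ev) for ev in trace]
print(novel)  # only the first completed window is novel
```

A real fuzzer would gather this trace via instrumentation of the allocator and memory accesses; here the trace is supplied by hand.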
"CtxFuzz: Discovering heap-based memory vulnerabilities through context heap operation sequence guided fuzzing" by Jiacheng Jiang, Cheng Wen, Zhiyuan Fu, and Shengchao Qin. Science of Computer Programming, vol. 249, Article 103395 (2026-04-01). DOI: 10.1016/j.scico.2025.103395
Background and context
Traditional learning management systems often fail to sustain student motivation in computer science education, where consistent practice is crucial for skill development. This necessitates specialized tools that better support technical learning processes, foster engagement, and enable timely interventions for struggling students.
Objectives
This study introduces and evaluates a dashboard module integrated into peer code review activities within an "Object-Oriented Design Laboratory" course. The research analyzes comprehensive peer code review data collected through this dashboard to examine student review behaviors and identify at-risk students through behavioral indicators.
Method
A mixed-methods approach compared an experimental group (n = 75) using the assessment dashboard against a control group (n = 78) without dashboard access. Both cohorts received identical instructions over 17 weeks. Data collection included peer code review processes, programming exam results, and student surveys to analyze review behaviors and perceptions of the dashboard.
Findings
Students with dashboard access demonstrated significantly improved engagement and self-awareness in the peer code review activity, along with measurable performance gains. Instructors benefited from more efficient monitoring capabilities across peer code review activities.
Implications
Dashboard systems enhance metacognitive awareness through self-reflection and interaction monitoring, showing particular promise for programming education. These systems improve learning outcomes and instructional effectiveness by providing visual feedback that helps students track progress and identify improvement areas while giving instructors valuable insights into engagement patterns and learning behaviors.
"Analyzing student perceptions and behaviors in the use of an engaging visualization dashboard for peer code review activities" by Hoang-Thanh Duong, Chuan-Lin Huang, Bao-An Nguyen, and Hsi-Min Chen. Science of Computer Programming, vol. 249, Article 103394 (2026-04-01). DOI: 10.1016/j.scico.2025.103394
Pub Date: 2026-04-01. Epub Date: 2025-11-05. DOI: 10.1016/j.scico.2025.103410
Guisheng Fan, Yuguo Liang, Longfei Zu, Huiqun Yu, Zijie Huang, Wentao Chen
In software development, developers create bug reports within an Issue Tracking System (ITS) to describe the cause, symptoms, severity, and other technical details of bugs. The ITS includes reports of both intrinsic bugs (i.e., those originating within the software itself) and extrinsic bugs (i.e., those arising from third-party dependencies). Although extrinsic bugs are not recorded in the Version Control System (VCS), they can still affect Just-In-Time (JIT) bug prediction models that rely on VCS-derived information.
Previous research has shown that excluding extrinsic bugs can significantly improve the performance of JIT bug prediction models. However, manually classifying bugs as intrinsic or extrinsic is time-consuming and error-prone. To address this issue, we propose CAN, a model that integrates the local feature extraction capability of TextCNN with the nonlinear approximation strength of the Kolmogorov-Arnold Network (KAN). Experiments on 1880 labeled samples from the OpenStack project demonstrate that the CAN model outperforms benchmark models such as BERT and CodeBERT, achieving an accuracy of 0.7492 and an F1-score of 0.8072. By comparing datasets with and without source code, we find that incorporating source code information enhances model performance. Finally, using Local Interpretable Model-agnostic Explanations (LIME), an explainable artificial intelligence technique, we identify that keywords such as “test” and “api” in bug reports contribute significantly to the prediction of extrinsic bugs.
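The kind of lexical signal that LIME surfaced (keywords such as "test" and "api" pointing toward extrinsic bugs) can be illustrated with a toy bag-of-words scorer. The keyword list, weights, and threshold below are entirely hypothetical; the paper's CAN model learns its features with TextCNN and KAN rather than from a hand-written keyword table.

```python
# Toy illustration of lexical cues for extrinsic-bug triage.
# The keyword weights and threshold are hypothetical assumptions;
# the actual CAN model learns features with TextCNN + KAN instead
# of using a fixed keyword list.

import re

# Hypothetical weights: positive -> evidence the bug is extrinsic
# (i.e., caused by third-party dependencies rather than the project).
KEYWORD_WEIGHTS = {"test": 0.6, "api": 0.5, "dependency": 0.8,
                   "upstream": 0.7, "library": 0.4}

def extrinsic_score(report: str) -> float:
    """Sum the weights of extrinsic-cue keywords present in a report."""
    tokens = set(re.findall(r"[a-z]+", report.lower()))
    return sum(KEYWORD_WEIGHTS.get(t, 0.0) for t in tokens)

report = "CI test fails after upstream api change in third-party library"
print(extrinsic_score(report) > 1.0)  # True: several extrinsic cues
```

This only shows why such keywords are useful signals; deciding the class from them alone would be far weaker than the learned model the paper evaluates.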
"Automatic identification of extrinsic bug reports for just-in-time bug prediction" by Guisheng Fan, Yuguo Liang, Longfei Zu, Huiqun Yu, Zijie Huang, and Wentao Chen. Science of Computer Programming, vol. 249, Article 103410 (2026-04-01). DOI: 10.1016/j.scico.2025.103410
Pub Date: 2026-03-01. Epub Date: 2025-09-20. DOI: 10.1016/j.scico.2025.103392
George Tsakalidis, Kostas Vergidis
Context
Business Process Management (BPM) plays a central role in helping organizations improve efficiency and service delivery, particularly in environments with rising demands and limited resources. Within this field, Business Process Redesign (BPR) has emerged as a way to rethink and restructure processes in response to continuous change. However, many existing BPR methodologies fall short—they lack methodological rigor and are often too narrowly tailored to specific industries or use cases.
Objectives
This study explores whether BPR methodologies are both systematically structured and broadly applicable across domains. It addresses three key questions: whether current approaches are methodologically grounded, whether they can be applied across diverse contexts, and what core elements are necessary to support both structure and generalizability in BPR design.
Methods
A systematic literature review (SLR) was conducted, applying an eight-step protocol to assess sixty-four primary BPR methodologies drawn from academic databases. Each methodology was evaluated against two sets of criteria: five indicators of systematic design (e.g., defined phases, interdependencies, evaluation checkpoints), and five indicators of generalizability (e.g., cross-domain adaptability, notation flexibility, heuristic support). A concept-centric synthesis was used to analyze the findings.
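The two-axis assessment in the Methods paragraph can be sketched as a simple scoring routine: each methodology is checked against five systematic-design criteria and five generalizability criteria, and only methodologies meeting all ten satisfy both axes. The criterion names beyond the examples given in the text, and the sample data, are illustrative assumptions rather than the review's actual coding sheet.

```python
# Sketch of the SLR's two-axis criteria assessment. Three criterion
# names per axis come from the text (defined phases, interdependencies,
# evaluation checkpoints; cross-domain adaptability, notation
# flexibility, heuristic support); the remaining two per axis and the
# sample methodology are hypothetical placeholders.

SYSTEMATIC = ["defined_phases", "interdependencies", "checkpoints",
              "explicit_inputs_outputs", "validation_steps"]
GENERAL = ["cross_domain", "notation_flexibility", "heuristic_support",
           "tool_independence", "scalability"]

def assess(methodology: dict) -> dict:
    """Return which axes a methodology satisfies (all criteria on axis)."""
    sys_ok = all(methodology.get(c, False) for c in SYSTEMATIC)
    gen_ok = all(methodology.get(c, False) for c in GENERAL)
    return {"systematic": sys_ok, "generalizable": gen_ok,
            "meets_all_ten": sys_ok and gen_ok}

# A methodology that is systematic but not broadly applicable:
m = {c: True for c in SYSTEMATIC}
result = assess(m)
print(result["systematic"], result["meets_all_ten"])  # True False
```

Under this framing, the review's headline finding is that only one of sixty-four methodologies would return `meets_all_ten == True`.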
Results
Of the methodologies reviewed, thirty-eight demonstrated systematic features, while only eight met broader applicability standards. Only one methodology satisfied all ten criteria, revealing a notable gap in the field and a need for more balanced, reusable frameworks.
Conclusion
The study highlights a significant gap in current BPR methodologies and presents the BPR Application Framework, a structured yet adaptable methodology that combines phase-based design with heuristic integration and notation-aware modeling. Compared with established references like BPM CBOK and Lean Six Sigma, it offers a clearer, more actionable path for practitioners and researchers seeking both rigor and flexibility in BPR.
"Systematicity and generalizability in business process redesign methodologies: A systematic literature review" by George Tsakalidis and Kostas Vergidis. Science of Computer Programming, vol. 248, Article 103392 (2026-03-01). DOI: 10.1016/j.scico.2025.103392
Pub Date: 2026-03-01. Epub Date: 2025-07-08. DOI: 10.1016/j.scico.2025.103360
Adrian Francalanza, Gerard Tabone, Frank Pfenning
This paper introduces Grits, a channel-based message-passing concurrent language based on the semi-axiomatic sequent calculus, a logical foundation underpinning intuitionistic session types. The language leverages modalities from adjoint logic to express a number of programming idioms such as broadcast communication and message cancellation. The Grits interpreter is developed using Go, and consists primarily of two components: a type-checker and an evaluator.
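One of the idioms the abstract mentions, broadcast communication, can be sketched independently of Grits. The snippet below uses plain Python queues (keeping one language across the examples in this digest, even though the Grits interpreter itself is written in Go); the `BroadcastChannel` API is invented for illustration and is not Grits syntax.

```python
# Minimal sketch of the broadcast idiom over message-passing channels:
# one sender, many receivers, each receiver sees every message.
# This is plain Python queues, not Grits; the class and method names
# are invented for illustration.

from queue import Queue

class BroadcastChannel:
    """Replicates each sent message to every subscribed receiver."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self) -> Queue:
        q = Queue()
        self.subscribers.append(q)
        return q

    def send(self, msg):
        for q in self.subscribers:  # copy the message to each receiver
            q.put(msg)

ch = BroadcastChannel()
a, b = ch.subscribe(), ch.subscribe()
ch.send("hello")
got = (a.get(), b.get())
print(got)  # ('hello', 'hello')
```

In Grits, by contrast, such replication is justified logically via adjoint-logic modalities rather than implemented as an ad hoc fan-out loop.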
"Grits: A message-passing programming language based on the semi-axiomatic sequent calculus" by Adrian Francalanza, Gerard Tabone, and Frank Pfenning. Science of Computer Programming, vol. 248, Article 103360 (2026-03-01). DOI: 10.1016/j.scico.2025.103360
Pub Date: 2026-03-01. Epub Date: 2025-08-19. DOI: 10.1016/j.scico.2025.103383
Maryam Gholami, Jafar Habibi, Maziar Goudarzi
Decision-making in software architecture is complex and requires expertise across domains. A key challenge is balancing software quality attributes. Architectural patterns, as knowledge repositories, offer solutions to recurring design problems. Thus, a structured approach to selecting patterns based on quality requirements is essential.
This paper presents an approach to improve decision-making in selecting architectural patterns concerning software quality attributes. Our method helps architects choose suitable patterns to achieve desired quality outcomes. For new or evolving systems, it recommends patterns aligned with target attributes, while for existing systems, it suggests improvements to enhance architecture.
We use Case-Based Reasoning (CBR) to achieve this goal. Eight architectural patterns were selected as cases, and relevant features were identified using the Repertory Grid Technique (RGT), with feature extraction performed by five experts. By computing the similarity between RGT vectors and CBR cases, our method predicts the most appropriate pattern. The proposed approach achieves 83% accuracy, demonstrating its effectiveness.
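The CBR retrieval step can be sketched as a nearest-case lookup: represent the architect's quality-attribute requirements as a vector, compare it to each stored pattern case, and recommend the most similar pattern. The three patterns, the three-attribute feature space, and all scores below are illustrative assumptions, not the paper's expert-elicited RGT data.

```python
# Sketch of CBR retrieval: pick the architectural pattern whose
# case vector is most similar (cosine similarity) to the stated
# quality-attribute requirements. Patterns, attributes, and scores
# are illustrative, not the paper's expert-elicited RGT data.

import math

CASES = {  # pattern -> scores on (performance, modifiability, scalability)
    "layered":       (0.4, 0.9, 0.5),
    "microservices": (0.6, 0.8, 0.9),
    "pipe-filter":   (0.7, 0.6, 0.6),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def recommend(requirements):
    """Return the stored pattern case most similar to the requirements."""
    return max(CASES, key=lambda p: cosine(requirements, CASES[p]))

print(recommend((0.5, 0.3, 0.9)))  # scalability-heavy requirements
```

With these toy numbers, scalability-heavy requirements retrieve the microservices case; the paper's method does the analogous comparison over eight patterns and expert-derived RGT features.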
"Enhancing decision-making for software architects: selecting appropriate architectural patterns based on quality attribute requirements" by Maryam Gholami, Jafar Habibi, and Maziar Goudarzi. Science of Computer Programming, vol. 248, Article 103383 (2026-03-01). DOI: 10.1016/j.scico.2025.103383
This paper presents enhanced schedulability analysis techniques for Adaptive Mixed-Criticality systems with Weakly-Hard constraints (AMC-WH), where low-criticality (LO) task jobs can continue to execute when the system switches to high-criticality (HI) mode. Prior AMC-WH studies typically adopt the skip-over model, in which up to s out of m consecutive LO task deadlines may be missed without violating system constraints. These approaches evaluate the Worst-Case Response Times (WCRT) of LO tasks under a fixed job execution pattern. In contrast, this work introduces a novel schedulability analysis framework based on the more general (m, k)-firm model, where each LO task must meet at least m out of any k consecutive deadlines. This extension allows for more flexible and configurable execution patterns for LO tasks after a mode transition, improving the adaptability of the system to varying operational conditions. Additionally, we propose an exact schedulability test for AMC-WH based on Response Time Analysis (RTA), which incorporates the (m, k)-firm model to precisely analyze schedulability by dynamically managing LO task execution patterns post-mode switch. Comprehensive experimental evaluations confirm the effectiveness and practicality of the proposed tests. In particular, our approach achieves an 18% improvement in schedulability compared to the AMC-WH skip-over baseline, while also optimizing resource utilization. By leveraging the flexibility of the (m, k)-firm model, our method supports a wide range of real-time applications with diverse tolerance levels for deadline misses, offering enhanced adaptability in LO task execution strategies.
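The (m, k)-firm constraint itself is easy to state as code: every window of k consecutive jobs must contain at least m deadline hits. The sketch below checks a given hit/miss pattern against the constraint; the sample pattern is toy data, not drawn from the paper's evaluation, and the schedulability tests in the paper reason over worst-case patterns analytically rather than enumerating a trace.

```python
# Sketch of an (m, k)-firm constraint check: every window of k
# consecutive jobs must contain at least m deadline hits.
# The sample pattern below is toy data, not the paper's workload.

def satisfies_mk_firm(hits, m, k):
    """hits: per-job booleans (True = deadline met), in release order.
    Returns True iff every k-length window has at least m hits
    (vacuously True if fewer than k jobs have been released)."""
    return all(sum(hits[i:i + k]) >= m
               for i in range(len(hits) - k + 1))

pattern = [True, True, False, True, True, False, True, True]
print(satisfies_mk_firm(pattern, m=2, k=3))  # every 3-window has >= 2 hits
```

The skip-over model used by prior work corresponds to a special case of such patterns, which is why the (m, k)-firm generalization admits more LO-task execution strategies after a mode switch.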
"Exact and sufficient schedulability tests for adaptive weakly-hard real-time mixed-criticality systems" by Hossein Rabbiun, Mahmoud Shirazi, and Jamal Mohammadi. Science of Computer Programming, vol. 248, Article 103382 (2026-03-01). DOI: 10.1016/j.scico.2025.103382