Pub Date: 2024-11-10, DOI: 10.1016/j.infsof.2024.107625
Samuel Sepúlveda , Ricardo Pérez-Castillo , Mario Piattini
Context:
Quantum computing is rapidly emerging as a transformative force in technology. Hybrid systems that combine quantum technology with classical software will soon become increasingly common, and software engineering techniques will be required to manage the complexity of designing such systems and to support their reuse.
Objective:
This paper introduces preliminary ideas on developing quantum–classical software using a Software Product Line approach.
Method:
This approach addresses the above challenges and provides a feature model together with a complete process for managing variability during the design and development of hybrid quantum–classical software. The use of the approach is illustrated and discussed with an example from the logistics domain.
Results:
The preliminary insights show the feasibility and suitability of applying the proposed approach to develop complex quantum–classical software.
Conclusions:
The main implication of this research is that it can help to manage complexity, maximize the reuse of classical and quantum software components, and deal with the highly changing technological stack in the current quantum computing field.
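The feature-model idea at the heart of such an approach can be sketched in a few lines. This is our own minimal illustration, not the paper's model: every feature name, the alternative group, and the cross-tree constraint below are hypothetical.

```python
# Toy feature model for a hybrid quantum-classical product line, with a
# validity check for product configurations. All feature names are hypothetical.

MANDATORY = {"ClassicalBackend", "QuantumBackend"}
OPTIONAL = {"ResultCaching", "NoiseMitigation"}
# Alternative group: exactly one quantum SDK must be chosen.
ALTERNATIVES = {"Qiskit", "Cirq", "Braket"}
# Cross-tree constraint (hypothetical): NoiseMitigation requires Qiskit.
REQUIRES = {"NoiseMitigation": "Qiskit"}

def is_valid(config: set[str]) -> bool:
    """Check a product configuration against the feature model."""
    if not MANDATORY <= config:
        return False                     # every mandatory feature present
    if len(config & ALTERNATIVES) != 1:
        return False                     # exactly one alternative selected
    if not config <= MANDATORY | OPTIONAL | ALTERNATIVES:
        return False                     # no unknown features
    return all(req in config for f, req in REQUIRES.items() if f in config)

print(is_valid({"ClassicalBackend", "QuantumBackend", "Qiskit"}))  # True
print(is_valid({"ClassicalBackend", "QuantumBackend",
                "NoiseMitigation", "Cirq"}))                       # False
```

Product derivation in an SPL process would then bind a valid configuration like this to concrete classical and quantum components.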
Title: A software product line approach for developing hybrid software systems. Information and Software Technology, vol. 178, Article 107625.
Pub Date: 2024-11-09, DOI: 10.1016/j.infsof.2024.107624
Giovanna Broccia , Maurice H. ter Beek , Alberto Lluch Lafuente , Paola Spoletini , Alessandro Fantechi , Alessio Ferrari
Context:
Attack-Defense Trees (ADTs) are a graphical notation used to model and evaluate security requirements. ADTs are popular because they facilitate communication among the different stakeholders involved in system security evaluation and are formal enough to be verified using methods like model checking. The understandability and user-friendliness of ADTs are claimed to be key factors in their success, but these aspects, along with user acceptance, have not been evaluated empirically.
Objectives:
This paper presents an experiment with 25 subjects designed to assess the understandability and user acceptance of the ADT notation, along with an internal replication involving 49 subjects.
Methods:
The experiments adapt the Method Evaluation Model (MEM) to examine understandability variables (i.e., effectiveness and efficiency in using ADTs) and user acceptance variables (i.e., ease of use, usefulness, and intention to use). The MEM is also used to evaluate the relationships between these dimensions. In addition, a comparative analysis of the results of the two experiments is carried out.
Results:
With some minor differences, the outcomes of the two experiments are aligned. The results demonstrate that ADTs are well understood by participants, with values of understandability variables significantly above established thresholds. They are also highly appreciated, particularly for their ease of use. The results also show that users who are more effective in using the notation tend to evaluate it better in terms of usefulness.
Conclusion:
These studies provide empirical evidence supporting both the understandability and perceived acceptance of ADTs, thus encouraging further adoption of the notation in industrial contexts, and development of supporting tools.
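For readers unfamiliar with the notation, the bottom-up evaluation that makes ADTs amenable to formal verification can be sketched as follows. This is a minimal illustration with hypothetical node labels, not the exact semantics variant used in the experiments.

```python
# Minimal Attack-Defense Tree: attack nodes combined with AND/OR, each
# optionally countered by a defense subtree. Labels are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    op: str = "leaf"                 # "leaf", "and", or "or"
    children: list = field(default_factory=list)
    counter: "Node | None" = None    # attached countermeasure subtree

def attack_succeeds(node: Node, active: set) -> bool:
    """Bottom-up evaluation: does the attack rooted here succeed, given the
    basic attack/defense actions present in `active`?"""
    if node.op == "leaf":
        ok = node.label in active
    elif node.op == "and":
        ok = all(attack_succeeds(c, active) for c in node.children)
    else:  # "or"
        ok = any(attack_succeeds(c, active) for c in node.children)
    # A successful countermeasure (evaluated the same way) negates the attack.
    if ok and node.counter is not None and attack_succeeds(node.counter, active):
        ok = False
    return ok

steal_data = Node("steal data", "or", [
    Node("phishing", counter=Node("awareness training")),
    Node("exploit server", "and", [Node("find CVE"), Node("deploy exploit")]),
])

print(attack_succeeds(steal_data, {"phishing"}))                        # True
print(attack_succeeds(steal_data, {"phishing", "awareness training"}))  # False
```

Model checkers operate on essentially this recursive semantics, which is what makes the notation formally verifiable.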
Title: Evaluating the understandability and user acceptance of Attack-Defense Trees: Original experiment and replication. Information and Software Technology, vol. 178, Article 107624.
Pub Date: 2024-10-31, DOI: 10.1016/j.infsof.2024.107611
Kai Petersen , Jan M. Gerken
Context:
The volume of research is continuously increasing. Manually analyzing large topic scopes and continuously updating literature studies with the newest results is effort-intensive and, therefore, difficult to achieve.
Objective:
To discuss possibilities and next steps for using LLMs (e.g., GPT-4) in the mapping study process.
Method:
The research can be classified as a solution proposal. The solution was iteratively designed and discussed among the authors based on their experience with LLMs and literature reviews.
Results:
We propose strategies for the mapping process, outlining the use of agents and prompting strategies for each step.
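One such step, study selection, could be delegated to a screening agent. The sketch below is our own illustration, not the paper's proposal: the prompt wording is hypothetical and `llm` is a stub standing in for any chat-model call.

```python
# A sketch of one mapping-study step delegated to an LLM "screening agent".
# The prompt is hypothetical; `llm` is any callable taking a prompt string.

SCREEN_PROMPT = (
    "You are screening papers for a systematic mapping study on {topic}.\n"
    "Inclusion criteria: {criteria}\n"
    "Title: {title}\nAbstract: {abstract}\n"
    "Answer strictly INCLUDE or EXCLUDE."
)

def screen(papers, topic, criteria, llm):
    """Ask the model to apply the inclusion criteria to each candidate paper."""
    decisions = {}
    for p in papers:
        prompt = SCREEN_PROMPT.format(topic=topic, criteria=criteria,
                                      title=p["title"], abstract=p["abstract"])
        decisions[p["title"]] = llm(prompt).strip().upper() == "INCLUDE"
    return decisions

# Keyword stub so the sketch runs offline; a real pipeline would call a model.
fake_llm = lambda prompt: "INCLUDE" if "unit testing" in prompt else "EXCLUDE"
papers = [{"title": "LLMs for unit testing", "abstract": "..."},
          {"title": "Quantum annealing", "abstract": "..."}]
decisions = screen(papers, "LLMs in SE", "must concern software engineering", fake_llm)
print(decisions)
```

An interactive pipeline would chain agents like this one for search, screening, classification, and synthesis, with a human confirming each step.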
Conclusion:
Given the potential of LLMs in literature studies, we should work on holistic solutions for LLM-supported mapping studies.
Title: On the road to interactive LLM-based systematic mapping studies. Information and Software Technology, vol. 178, Article 107611.
Pub Date: 2024-10-30, DOI: 10.1016/j.infsof.2024.107601
Yan Wang , Xintao Niu , Huayao Wu , Changhai Nie , Lei Yu , Xiaoyin Wang , Jiaxi Xu
Context:
The Incremental Covering Array (ICA) offers a flexible and efficient test schedule for Combinatorial Testing (CT) by enabling dynamic adjustment of test strength. Despite its importance, ICA generation has been under-explored in the CT community, resulting in limited and suboptimal existing approaches.
Objective:
To address this gap, we introduce a novel strategy, namely Top-down, for optimizing ICA generation.
Method:
In contrast to the traditional strategy, named Bottom-up, Top-down starts with a higher-strength test set and then extracts lower-strength sets from it, thus leveraging test case generation algorithms more effectively.
Results:
We conducted a comparative evaluation of the two strategies across 17 real-world software systems with 84 versions in total. The results demonstrate that, compared with Bottom-up, the Top-down strategy requires less time and generates smaller ICAs while covering more higher-strength interactions. Furthermore, Top-down outperforms Bottom-up in early fault detection and code line coverage, while also surpassing the random and direct CA generation strategies.
Conclusion:
The Top-down strategy not only improves the efficiency of test case generation but also enhances the effectiveness of fault detection in incremental testing scenarios.
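The Top-down idea, as the abstract describes it, can be sketched with a toy greedy extraction. This is our own illustration, not the authors' algorithm; the starting strength-3 array is the even-parity code for four binary factors.

```python
# Toy Top-down sketch: start from a higher-strength covering array and
# greedily extract a smaller, lower-strength one from its rows.
from itertools import combinations, product

def uncovered(rows, k, t, levels=2):
    """Strength-t interactions over k factors not yet covered by `rows`."""
    need = {(cols, vals)
            for cols in combinations(range(k), t)
            for vals in product(range(levels), repeat=t)}
    for r in rows:
        for cols in combinations(range(k), t):
            need.discard((cols, tuple(r[c] for c in cols)))
    return need

def extract_lower_strength(ca, k, t, levels=2):
    """Greedily pick rows of `ca` until every strength-t interaction is covered."""
    need, chosen = uncovered([], k, t, levels), []
    while need:
        # Pick the row covering the most still-uncovered interactions.
        best = max(ca, key=lambda r: sum(
            (cols, tuple(r[c] for c in cols)) in need
            for cols in combinations(range(k), t)))
        chosen.append(best)
        need = uncovered(chosen, k, t, levels)
    return chosen

# Strength-3 CA for 4 binary factors: the even-parity code (8 rows).
ca3 = [(0,0,0,0),(0,0,1,1),(0,1,0,1),(0,1,1,0),
       (1,0,0,1),(1,0,1,0),(1,1,0,0),(1,1,1,1)]
ca2 = extract_lower_strength(ca3, k=4, t=2)
print(len(ca2), len(ca3))  # the extracted strength-2 array is no larger
```

Because every extracted row comes from the higher-strength array, the result doubles as an incremental schedule: run the strength-2 rows first, then the remaining rows to reach strength 3.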
Title: Top-down: A better strategy for incremental covering array generation. Information and Software Technology, vol. 178, Article 107601.
Pub Date: 2024-10-29, DOI: 10.1016/j.infsof.2024.107608
Luigi Quaranta , Kelly Azevedo , Fabio Calefato , Marcos Kalinowski
Context:
Rapid advancements in Artificial Intelligence (AI) and Machine Learning (ML) are revolutionizing software engineering in every application domain, driving unprecedented transformations and fostering innovation. However, despite these advances, several organizations are experiencing friction in the adoption of ML-based technologies, mainly due to the current shortage of ML professionals. In this context, Automated Machine Learning (AutoML) techniques have been presented as a promising solution to democratize ML adoption, even in the absence of specialized people.
Objective:
Our research aims to provide an overview of the evidence on the benefits and limitations of AutoML tools being adopted in industry.
Methods:
We conducted a Multivocal Literature Review, which allowed us to identify 54 sources from the academic literature and 108 sources from the grey literature reporting on AutoML benefits and limitations. We extracted explicitly reported benefits and limitations from the papers and applied the thematic analysis method for synthesis.
Results:
In general, we identified 18 reported benefits and 25 limitations. Concerning the benefits, we highlight that AutoML tools can help streamline the core steps of ML workflows, namely data preparation, feature engineering, model construction, and hyperparameter tuning—with concrete benefits on model performance, efficiency, and scalability. In addition, AutoML empowers both novice and experienced data scientists, promoting ML accessibility. However, we highlight several limitations that may represent obstacles to the widespread adoption of AutoML. For instance, AutoML tools may introduce barriers to transparency and interoperability, exhibit limited flexibility for complex scenarios, and offer inconsistent coverage of the ML workflow.
Conclusion:
The effectiveness of AutoML in facilitating the adoption of machine learning by users may vary depending on the specific tool and the context in which it is used. Today, AutoML tools are used to increase human expertise rather than replace it and, as such, require skilled users.
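What such tools automate can be illustrated with a deliberately toy model-search loop. Real AutoML tools search far richer spaces; the threshold "models" below are hypothetical stand-ins and, to keep the sketch short, are not fitted per fold.

```python
# Toy illustration of AutoML's core loop: search a configuration space with
# cross-validation and keep the best candidate. Pure stdlib, hypothetical models.
import random
from statistics import mean

def make_model(threshold):
    """A trivial one-parameter classifier: predict 1 iff x > threshold."""
    return lambda x: int(x > threshold)

def cv_accuracy(threshold, data, k=5):
    """k-fold accuracy of the threshold classifier."""
    folds = [data[i::k] for i in range(k)]
    return mean(mean(int(make_model(threshold)(x) == y) for x, y in fold)
                for fold in folds)

def auto_select(data, candidates):
    """The hyperparameter-tuning step: pick the best threshold by CV."""
    return max(candidates, key=lambda t: cv_accuracy(t, data))

random.seed(0)
data = [(x, int(x > 0.6)) for x in (random.random() for _ in range(200))]
best = auto_select(data, candidates=[0.2, 0.4, 0.6, 0.8])
print(best)
```

Production tools extend this same loop to data preparation, feature engineering, and model construction, which is exactly the workflow coverage the review examines.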
Title: A multivocal literature review on the benefits and limitations of industry-leading AutoML tools. Information and Software Technology, vol. 178, Article 107608.
Pub Date: 2024-10-29, DOI: 10.1016/j.infsof.2024.107610
Agnia Sergeyuk , Yaroslav Golubev , Timofey Bryksin , Iftekhar Ahmed
Context:
The last several years saw the emergence of AI assistants for code — multi-purpose AI-based helpers in software engineering. As they become omnipresent in all aspects of software development, it becomes critical to understand their usage patterns.
Objective:
We aim to better understand how specifically developers are using AI assistants, why they are not using them in certain parts of their development workflow, and what needs to be improved in the future.
Methods:
In this work, we carried out a large-scale survey aimed at how AI assistants are used, focusing on specific software development activities and stages. We collected opinions of 481 programmers on five broad activities: (a) implementing new features, (b) writing tests, (c) bug triaging, (d) refactoring, and (e) writing natural-language artifacts, as well as their individual stages.
Results:
Our results provide a novel comparison of the different stages at which AI assistants are used that is both comprehensive and detailed. It highlights specific activities that developers find less enjoyable and want to delegate to an AI assistant, e.g., writing tests and natural-language artifacts. We also identify more granular stages where AI assistants are used, such as generating tests and generating docstrings, as well as less studied parts of the workflow, such as generating test data. Among the reasons for not using assistants, there are general aspects like trust and company policies, as well as more concrete issues like the lack of project-size context, which can be the focus of future research.
Conclusion:
The provided analysis highlights stages of software development that developers want to delegate and that are already popular for using AI assistants, which can be a good focus for features aimed at helping developers right now. The main reasons for not using AI assistants can serve as a guideline for future work.
Title: Using AI-based coding assistants in practice: State of affairs, perceptions, and ways forward. Information and Software Technology, vol. 178, Article 107610.
Pub Date: 2024-10-28, DOI: 10.1016/j.infsof.2024.107606
Hongwei Tao , Han Liu , Xiaoxu Niu , Licheng Ding , Yixiang Chen , Qiaoling Cao
Context:
With the rapid development of software, software accidents keep emerging, and their catastrophic consequences have made people realize the importance of software trustworthiness. As an indispensable means of ensuring software quality, traditional trustworthiness measurement evaluates software trustworthiness by studying trustworthiness attributes in a static way. However, most of the factors underlying these attributes change over time. Current research often ignores how software changes after it has been running for some time, and therefore cannot reflect changes in software trustworthiness at different running times.
Objective:
Our objective in this paper is to study the relationship between running time and software trustworthiness, and to design a running-time-related software trustworthiness measurement model from the untrustworthy evidence related to software aging.
Method:
We first extract untrustworthy evidence from bugs related to software aging in 5 subsystems of 4 public defect databases and in 18 well-known software accidents, establish a risk level model, and design metric elements of untrustworthy evidence based on software aging. Then we construct a software aging cause category trustworthiness measurement model based on Boltzmann entropy. Finally, we build a software trustworthiness measurement model based on weighted Boltzmann entropy, with the weight values determined by the Brassard Priority Synthesis Analysis method.
Result:
Unlike common resource-consumption and performance parameters, a model based on weighted Boltzmann entropy can describe the influence of various parameters on software trustworthiness through the risk state; it can reflect changes in the system state and describe that state completely.
Conclusion:
The empirical study shows the effectiveness and practicality of our method for evaluating dynamic software trustworthiness. It also points to a promising avenue for future research and application in software trustworthiness measurement.
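The abstract does not give the model's formula, so the following is only a generic illustration of a weighted entropy over risk states, not the authors' model; the three risk states and their severity weights are hypothetical.

```python
# Generic weighted entropy over a risk-state distribution (illustrative only,
# NOT the paper's weighted Boltzmann entropy model).
import math

def weighted_entropy(risk_probs, weights):
    """Weighted entropy of a risk-state distribution; larger values indicate a
    more disordered (less trustworthy) state."""
    assert abs(sum(risk_probs) - 1.0) < 1e-9
    return -sum(w * p * math.log(p)
                for w, p in zip(weights, risk_probs) if p > 0)

severity = [1.0, 2.0, 4.0]                              # low / medium / high risk
calm = weighted_entropy([0.90, 0.08, 0.02], severity)   # freshly restarted
aged = weighted_entropy([0.40, 0.35, 0.25], severity)   # after long uptime
print(calm < aged)  # entropy grows as the software ages toward riskier states
```

The intuition matches the paper's framing: as aging shifts probability mass toward higher-risk states, the weighted entropy rises, signaling declining trustworthiness over running time.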
Title: Software aging oriented trustworthiness measurement based on weighted Boltzmann entropy. Information and Software Technology, vol. 178, Article 107606.
Pub Date: 2024-10-28, DOI: 10.1016/j.infsof.2024.107612
Kati Saarni , Marjo Kauppinen , Tomi Männistö
Context
Companies are interested in building successful value-producing ecosystems together to offer end users a broader digital service offering and better meet customer needs. However, most ecosystems fail in the early years.
Objective
We investigated one small software ecosystem from the planning phase to the operative phase. The participating companies left one by one because the ecosystem was unsuccessful, and it ended after four operative years. The ecosystem provided a digital service offering based on the defined MVP (Minimum Viable Product), which is why we were interested in understanding the MVP's impact on the ecosystem's failure.
Method
We conducted a case study whose results are based on semi-structured interviews with eight representatives of the software ecosystem.
Results
This study showed that the actors prioritized functionalities out of the MVP, so the MVP was no longer based on the defined value proposition, target customer groups, and customer paths. It then became difficult for the actors to achieve their objectives. The companies' commitment depended on the objectives they had set, and when those objectives were not achieved, the actors left and the software ecosystem failed.
Conclusion
The results show that the MVP can significantly affect the failure of a small software ecosystem in which all actors have a keystone role. The MVP largely defines what kind of digital service offering the software ecosystem provides and whether the actors can achieve their objectives, especially their sales goals. Thus, prioritizing the functionalities of the MVP is a critical activity.
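The study's finding is that de-scoping drifted the MVP away from its defined value proposition, target customer groups, and customer paths. A minimal sketch of how prioritization could stay anchored to those three criteria follows; the feature names, ratings, and weights are entirely illustrative, not from the case study.

```python
# Hypothetical sketch: score each MVP functionality against the criteria the
# study names (value proposition, customer groups, customer paths), so that
# de-scoping decisions stay anchored to them. All data is illustrative.

WEIGHTS = {"value_proposition": 0.5, "customer_groups": 0.3, "customer_paths": 0.2}

def mvp_score(feature):
    """Weighted sum of a feature's 1-5 ratings on each criterion."""
    return sum(w * feature[criterion] for criterion, w in WEIGHTS.items())

features = [
    {"name": "search", "value_proposition": 5, "customer_groups": 4, "customer_paths": 5},
    {"name": "loyalty_badges", "value_proposition": 1, "customer_groups": 2, "customer_paths": 1},
]

# Rank features; the lowest-scoring ones are the safest candidates to cut,
# because cutting them least erodes the defined value proposition.
ranked = sorted(features, key=mvp_score, reverse=True)
```

The design choice here is simply to make the cut list a function of the same artifacts the actors originally agreed on, rather than of each company's standalone objectives.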
{"title":"Impact of minimum viable product on software ecosystem failure","authors":"Kati Saarni , Marjo Kauppinen , Tomi Männistö","doi":"10.1016/j.infsof.2024.107612","journal":"Information and Software Technology","volume":"178","pages":"Article 107612","publicationDate":"2024-10-28"}
Pub Date : 2024-10-26 DOI: 10.1016/j.infsof.2024.107607
Anjana M.S. , Patricia Lago , Aryadevi Remanidevi Devidas , Maneesha Vinodini Ramesh
Context:
India’s coal use for electricity jumped 13% in 2021–22. Energy management systems (EnMS) are seen as a solution, but only sustainable EnMSs can have a discernible impact on the carbon footprint and the Return On Investment (ROI).
Objective:
Designing a software-intensive sustainable energy management system requires considering technical, environmental, social, and economic factors, which helps evaluate an EnMS’s overall impact and improve its design. We proposed EnSAF for the efficient utilization of the energy involved in the design of sustainability-aware EnMSs.
Method:
In this work, EnMSs in diverse use cases were selected and analyzed in terms of the technical, social, environmental, and economic dimensions of sustainability in collaboration with various stakeholders. The application-specific design concerns and Quality Attributes (QAs) were addressed with the Sustainability Assessment Framework (SAF) toolkit. The resultant SAF instances of each EnMS, derived through analysis and discussion with the stakeholders, were then analyzed to propose the Decision Maps (DMs) and Sustainability Quality (SQ) model for generic EnMSs.
Results:
This study demonstrated the following outcomes: (i) technical concerns dominate existing EnMSs; (ii) integrating renewable energy resources reduces dependency on the main power grid and nurtures a sustainable environment by diminishing the carbon footprint and, in the economic dimension, minimizing payback time; (iii) extant definitions of quality attributes need significant scrutiny and updates with respect to the objectives of EnMSs.
Conclusion:
The SAF toolkit was found to be deficient in representing the design concerns and quality attributes relevant to sustainable EnMSs. Prevailing DMs are unable to factor in stakeholders’ concerns because the model cannot account for the spatio-temporal representation of QAs. Based on the insights from the four SAF instances, a generic framework, EnSAF, is proposed to tackle the concerns relevant to EnMS sustainability. This work proposed a representation of DMs in the SAF toolkit specifically for sustainability-aware EnMSs.
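The conclusion argues that decision maps need a spatio-temporal representation of quality attributes. A minimal data-structure sketch of what such an entry might record is shown below; the field names and values are illustrative assumptions, not the SAF toolkit's actual schema.

```python
# Hypothetical sketch: a decision-map entry tagging each Quality Attribute
# with a sustainability dimension, plus the spatial and temporal fields the
# paper argues prevailing DMs lack. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class QualityAttribute:
    name: str
    dimension: str       # "technical" | "environmental" | "social" | "economic"
    effect_horizon: str  # when the effect manifests, e.g. "immediate" or "systemic"
    scope: str           # where it applies, e.g. "household", "microgrid", "regional grid"

qa = QualityAttribute(
    name="energy efficiency",
    dimension="environmental",
    effect_horizon="systemic",
    scope="microgrid",
)
```

Recording horizon and scope alongside each QA is one way a decision map could express that, say, a microgrid-level concern matters on a different timescale than a regional-grid one.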
{"title":"Energize sustainability: EnSAF for sustainability aware, software intensive energy management systems","authors":"Anjana M.S. , Patricia Lago , Aryadevi Remanidevi Devidas , Maneesha Vinodini Ramesh","doi":"10.1016/j.infsof.2024.107607","journal":"Information and Software Technology","volume":"178","pages":"Article 107607","publicationDate":"2024-10-26"}
Pub Date : 2024-10-21 DOI: 10.1016/j.infsof.2024.107605
Alessandro Caniglia, Vincenzo Dentamaro, Stefano Galantucci, Donato Impedovo
Context:
In today’s software development landscape, the DevSecOps approach has gained traction due to its focus on the software development process and on bolstering security measures in projects, an essential task in light of ever-evolving cybersecurity threats.
Objective:
This study aims to address the lack of metrics for quantitatively assessing the efficacy of DevSecOps from both security and business-logic perspectives.
Methods:
To tackle this issue, the research introduces the Framework of Business Index Concerning Security (FOBICS), a set of metrics designed to enable transparent evaluations of project security. FOBICS considers various perspectives relevant to DevSecOps practices. It includes factors such as project duration and financial outcomes, making it appealing for implementation in business settings.
Results:
The effectiveness of FOBICS is validated theoretically and empirically via its application in two real-world projects: the results from these implementations show a correlation between the FOBICS metrics and both the security strategies employed and the development methodologies adopted by the diverse teams throughout the projects.
Conclusion:
Hence, FOBICS emerges as a tool for assessing and continuously monitoring project security, offering insights into areas of strength and areas that may require enhancement. FOBICS is shown to be effective in assessing the level of DevSecOps implementation. The ease of calculating FOBICS metrics makes them easily interpretable and continuously verifiable. Moreover, FOBICS summarizes most of the other quantitative and qualitative metrics in the literature.
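The abstract describes FOBICS as a set of easily calculated metrics spanning security and business factors such as project duration and financial outcomes. The paper defines the actual metrics, so the sketch below is only a hypothetical illustration of aggregating normalized per-dimension scores into one project index; the dimension names and weights are assumptions.

```python
# Hypothetical sketch: aggregate normalized per-dimension DevSecOps scores
# into a single project security index. Dimension names and weights are
# illustrative; FOBICS's real metric definitions are given in the paper.

def security_index(scores, weights):
    """Weighted mean of [0, 1] dimension scores; both dicts share keys."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same dimensions")
    total = sum(weights.values())
    return sum(weights[d] * scores[d] for d in scores) / total

project = {"pipeline_security": 0.8, "time_to_fix": 0.6, "budget_adherence": 0.9}
weights = {"pipeline_security": 0.5, "time_to_fix": 0.3, "budget_adherence": 0.2}
idx = security_index(project, weights)
```

Keeping each dimension score in [0, 1] is what makes such an index easy to interpret and to re-verify continuously, which is the property the abstract emphasizes.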
{"title":"FOBICS: Assessing project security level through a metrics framework that evaluates DevSecOps performance","authors":"Alessandro Caniglia, Vincenzo Dentamaro, Stefano Galantucci, Donato Impedovo","doi":"10.1016/j.infsof.2024.107605","journal":"Information and Software Technology","volume":"178","pages":"Article 107605","publicationDate":"2024-10-21"}