Selina Demi, Mary Sánchez-Gordón, Monica Kristiansen, Xabier Larrucea
Blockchain technology has attracted significant attention in both academia and industry. Recently, the application of blockchain has been advocated in software engineering (SE). The global software engineering paradigm exacerbates trust issues, as distributed and cross-organizational teams need to share software artifacts. In such a context, there is a need for a decentralized yet reliable traceability knowledge base to keep track of what/how/when/by whom software artifacts were created or changed. This study presents a blockchain-enabled framework for trustworthy and collaborative traceability management and identifies benefits, challenges, and potential improvements based on the feedback of SE experts. A qualitative approach was followed through semi-structured interviews with SE experts, and the transcripts were analyzed using the content analysis technique. The results indicated the emergence of five categories, further grouped into three main categories: experts' perceptions, blockchain-based software process improvement, and experts' recommendations. In addition, the findings suggested four archetypes of organizations that may be interested in blockchain technology: distributed organizations, organizations with contract-based projects, organizations in regulated domains, and regulators who may push the use of this technology. Further efforts should be devoted to integrating the proposal with tools used throughout the software development lifecycle and to leveraging the potential of smart contracts for automatically validating the implementation of requirements.
{"title":"Trustworthy and collaborative traceability management: Experts’ feedback on a blockchain-enabled framework","authors":"Selina Demi, Mary Sánchez-Gordón, Monica Kristiansen, Xabier Larrucea","doi":"10.1002/smr.2707","DOIUrl":"10.1002/smr.2707","url":null,"abstract":"<p>Blockchain technology has attracted significant attention in both academia and industry. Recently, the application of blockchain has been advocated in software engineering. The global software engineering paradigm exacerbates trust issues, as distributed and cross-organizational teams need to share software artifacts. In such a context, there is a need for a decentralized yet reliable traceability knowledge base to keep track of what/how/when/by whom software artifacts were created or changed. This study presents a blockchain-enabled framework for trustworthy and collaborative traceability management and identifies benefits, challenges, and potential improvements based on the feedback of software engineering experts. A qualitative approach was followed in this study through semistructured interviews with software engineering (SE) experts. Transcripts were analyzed by applying the content analysis technique. The results indicated the emergence of five categories, further grouped into three main categories: experts' perceptions, blockchain-based software process improvement, and experts' recommendations. In addition, the findings suggested four archetypes of organizations that may be interested in blockchain technology: distributed organizations, organizations with contract-based projects, organizations in regulated domains, and regulators who may push the use of this technology. Further efforts should be devoted to the integration of the proposal with tools used throughout the software development lifecycle and leveraging the potential of smart contracts in validating the implementation of requirements automatically.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 11","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.2707","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141514087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agile methods emerged to overcome the obstacles of structured methodologies such as the waterfall, prototype, and spiral models. Numerous studies show the usefulness of agile approaches in software development; however, studies on agile maintenance remain scarce. Regardless of the chosen methodology, software maintenance can be carried out in either a local (on-premises) or a global (distributed) environment. In a local environment, the software maintenance team is co-located on the same premises, while in a global environment, the team is geographically dispersed from the customer. The main objective of this systematic mapping (SM) study is to identify practices useful for software maintenance using agile approaches in the cloud environment. We conducted a comprehensive search in well-known digital databases and examined the articles that met the predefined inclusion criteria, selecting and analyzing 48 of 320 articles published between 2000 and 2022. The findings reveal that agile can resolve the major issues faced in traditional software maintenance, making this approach significant in global/distributed software maintenance. Cloud computing plays a vital role in software maintenance, and most of the studies highlight XP- and Scrum-based agile maintenance models. The study also found a need for more agile maintenance solutions in the cloud, underscoring the importance of agile in software maintenance both locally and globally. Irrespective of the environment, cloud computing provides a centralized platform for collaboration and communication, while also offering the scalability and flexibility to adapt to diverse infrastructure needs. This allows agile maintenance practices to be implemented across both local and global environments, leveraging the cloud's capabilities to overcome geographical and infrastructural challenges.
{"title":"Software maintenance practices using agile methods towards cloud environment: A systematic mapping","authors":"Mohammed Almashhadani, Alok Mishra, Ali Yazici","doi":"10.1002/smr.2698","DOIUrl":"10.1002/smr.2698","url":null,"abstract":"<p>Agile methods have emerged to overcome the obstacles of structured methodologies, such as the waterfall, prototype, spiral, and so on. There are studies showing the usefulness of agile approaches in software development. However, studies on Agile maintenance are very limited in number. Regardless of the chosen methodology, software maintenance can be carried out in either a local (on-the-premise) or global (distributed) environment. In a local environment, the software maintenance team is co-located on the same premises, while in a global environment, the team is geographically dispersed from the customer. The main objective of this Systematic Mapping (SM) study is to identify the practices useful for software maintenance using the Agile approaches in the Cloud environment. We have conducted a comprehensive search in well-known digital databases and examined the articles that map to the pre-defined inclusion criteria. The study selected and analyzed 48 articles out of 320 published between 2000 and 2022. The findings of the mapping study reveal that Agile can resolve the major issues faced in traditional software maintenance, making the role of this approach significant in global/distributed software maintenance. Cloud computing plays a vital role in software maintenance. Most of the studies highlight the application of XP- and Scrum-based Agile maintenance models. The study found a need for more Agile maintenance solutions in the cloud, highlighting the importance of agile in software maintenance, both locally and globally. Irrespective of the environment, Cloud computing provides a centralized platform for collaboration and communication, while also offering scalability and flexibility to adapt to diverse infrastructure needs. This allows agile maintenance practices to be implemented across both local and global environments, leveraging the cloud's capabilities to overcome geographical and infrastructural challenges.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 11","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.2698","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141514086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microservice architecture (MSA) is a mainstream architectural style due to its high maintainability and scalability. In practice, an appropriate microservice-oriented decomposition is the foundation for a system to realize the benefits of MSA. In decomposing monolithic systems into microservices, researchers have explored many optimization objectives, among which modularity is the predominant quality attribute. Security is also a critical quality attribute that measures the extent to which a system protects data from malicious access or use by attackers. Considering security in microservice-oriented decomposition can help avoid the risk of leaking critical data and other unexpected software security issues. However, few researchers consider the security objective during microservice-oriented decomposition, because measuring security and trading it off against other objectives are challenging in practice. To bridge this research gap, we propose a security-optimized approach for microservice-oriented decomposition (So4MoD). In this approach, we adapt five metrics from previous studies to measure the data security of candidate microservices. A multi-objective optimization algorithm based on NSGA-II is designed to search for microservices with optimized security and modularity. To validate the effectiveness of So4MoD, we perform several experiments on eight open-source projects and compare the decomposition results to three other state-of-the-art approaches, namely FoSCI, CO-GCN, and MSExtractor. The experimental results show that our approach achieves at least an 11.5% improvement in terms of security metrics. Moreover, the decomposition results of So4MoD outperform the other approaches in four modularity metrics, demonstrating that So4MoD can optimize data security while pursuing a well-modularized MSA.
{"title":"Towards a security-optimized approach for the microservice-oriented decomposition","authors":"Xiaodong Liu, Zhikun Chen, Yu Qian, Chenxing Zhong, Huang Huang, Shanshan Li, Dong Shao","doi":"10.1002/smr.2670","DOIUrl":"10.1002/smr.2670","url":null,"abstract":"<p>Microservice architecture (MSA) is a mainstream architectural style due to its high maintainability and scalability. In practice, an appropriate microservice-oriented decomposition is the foundation to make a system enjoy the benefits of MSA. In terms of decomposing monolithic systems into microservices, researchers have been exploring many optimization objectives, of which modularity is a predominantly focused quality attribute. Security is also a critical quality attribute, that measures the extent to which a system protects data from malicious access or use by attackers. Considering security in microservices-oriented decomposition can help avoid the risk of leaking critical data and other unexpected software security issues. However, few researchers consider the security objective during microservice-oriented decomposition, because the measurement of security and the trade-off with other objectives are challenging in reality. To bridge this research gap, we propose a security-optimized approach for microservice-oriented decomposition (So4MoD). In this approach, we adapt five metrics from previous studies for the measurement of the data security of candidate microservices. A multi-objective optimization algorithm based on NSGA-II is designed to search for microservices with optimized security and modularity. To validate the effectiveness of the proposed So4MoD, we perform several experiments on eight open-source projects and compare the decomposition results to other three state-of-the-art approaches, that is, FoSCI, CO-GCN, and MSExtractor. The experiment results show that our approach can achieve at least an 11.5% improvement in terms of security metrics. Moreover, the decomposition results of So4MoD outperform other approaches in four modularity metrics, demonstrating that So4MoD can optimize data security while pursuing a well-modularized MSA.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 10","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141506978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yifan Wu, Chendong Lin, An Liu, Lei Zhao, Xiaofang Zhang
In the process of crowdsourced testing, massive numbers of bug reports are submitted. The severity level of a bug report is an important indicator that helps triagers on crowdsourced platforms order reports efficiently so that developers can prioritize high-severity defects. Much work has been devoted to automatically assigning severity levels to large numbers of bug reports in crowdsourced testing systems. These works target standard bug reports, focus on the textual part of the report, and use various feature engineering methods and classification techniques. However, while achieving good performance, these methods still face two challenges: they do not consider image information in mobile testing, and the semantic information of words in bug reports is discontinuous. In this paper, we propose a new severity prediction method using heterogeneous graph convolutional networks with screenshots (SPHGCN-S), which combines text features and screenshot information to understand reports more comprehensively. In addition, our approach applies the heterogeneous graph convolutional network (HGCN) architecture, which can capture global word information to alleviate the problem of word discontinuity and model the underlying relations between reports. We conduct a comprehensive study comparing seven commonly adopted bug report severity prediction methods with our approach. The experimental results show that SPHGCN-S can improve severity prediction performance and effectively identify high-severity reports.
{"title":"Crowdsourced bug report severity prediction based on text and image understanding via heterogeneous graph convolutional networks","authors":"Yifan Wu, Chendong Lin, An Liu, Lei Zhao, Xiaofang Zhang","doi":"10.1002/smr.2705","DOIUrl":"10.1002/smr.2705","url":null,"abstract":"<p>In the process of crowdsourced testing, massive bug reports are submitted. Among them, the severity level of the bug report is an important indicator for traigers of crowdsourced platforms to arrange the order of reports efficiently so that developers can prioritize high-severity defects. A lot of work has been devoted to the study of automatically assigning severity levels to a large number of bug reports in crowdsourcing test systems. The research objects of these works are standard bug reports, focusing on the text part of the report, using various feature engineering methods and classification techniques. However, while achieving good performance, these methods still need to overcome two challenges: no consideration of image information in mobile testing and discontinuous semantic information of words in bug reports. In this paper, we propose a new method of severity prediction by using heterogeneous graph convolutional networks with screenshots (SPHGCN-S), which combines text features and screenshots information to understand the report more comprehensively. In addition, our approach applies the heterogeneous graph convolutional network (HGCN) architecture, which can capture the global word information to alleviate the semantic problem of word discontinuity and underlying relations between reports. We conduct a comprehensive study to compare seven commonly adopted bug report severity prediction methods with our approach. The experimental results show that our approach SPHGCN-S can improve severity prediction performance and effectively predict reports with high severity.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 11","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The minimizing Delta Debugging algorithm (DDMIN) was among the first algorithms designed to automate the task of reducing test cases. Its popularity stems from the fact that it works on any kind of input, without requiring knowledge of the input structure. Several studies have shown that smaller outputs can be produced faster with more advanced techniques (e.g., building a tree representation of the input and reducing that data structure); however, if the structure is unknown or changes frequently, maintaining the descriptors might not be resource-efficient. Therefore, in this paper, we focus on evaluating the novel fixed-point iteration of minimizing Delta Debugging (DDMIN*) on publicly available test suites related to software engineering. Our experiments show that DDMIN* reduces inputs by a further 48.08% on average compared to DDMIN (using lines as the units of reduction). Although the effectiveness of the algorithm improves, it comes at the cost of additional testing steps. This study shows how the characteristics of the input affect the results and when using DDMIN* pays off.
{"title":"Evaluation of the fixed-point iteration of minimizing delta debugging","authors":"Dániel Vince, Ákos Kiss","doi":"10.1002/smr.2702","DOIUrl":"10.1002/smr.2702","url":null,"abstract":"<p>The minimizing Delta Debugging (DDMIN) was among the first algorithms designed to automate the task of reducing test cases. Its popularity is based on the characteristics that it works on any kind of input, without knowledge about the input structure. Several studies proved that smaller outputs can be produced faster with more advanced techniques (e.g., building a tree representation of the input and reducing that data structure); however, if the structure is unknown or changing frequently, maintaining the descriptors might not be resource-efficient. Therefore, in this paper, we focus on the evaluation of the novel fixed-point iteration of minimizing Delta Debugging (DDMIN*) on publicly available test suites related to software engineering. Our experiments show that DDMIN* can help reduce inputs further by 48.08% on average compared to DDMIN (using lines as the units of the reduction). Although the effectiveness of the algorithm improved, it comes with the cost of additional testing steps. This study shows how the characteristics of the input affect the results and when it pays off using DDMIN*.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 10","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141506979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fanyi Meng, Hai Yu, Chun Yong Chong, Ying Wang, Zhiliang Zhu
The undocumented evolution of a software project and its underlying architecture underscores the need to recover the architecture from the software's implementation-level artifacts. Despite the existence of various software remodularization techniques, they often suffer from inaccuracies, and evaluating their effectiveness is challenging due to the absence of accurate “ground-truth” architectures or reference models. Prior studies on reference model construction are time-consuming and labor-intensive, as they rely heavily on manual analysis by domain experts. Moreover, existing approaches that directly use the directory or package structure of the latest version can be unreliable, as they lack in-depth analysis of the employed software structure. To address these limitations, we propose Automated Construction of Reference Model (ACRM), an approach for automatically constructing reference models by assigning weights to classes for various software projects using the metadata of all software versions and historical maintenance records. We evaluate ACRM through both quantitative and qualitative analyses. The experimental results provide quantitative validation and show that the generated reference models are reasonable, as confirmed by the relationship between the proposed reference models and architectural smells or bugs. Furthermore, we conduct a survey among practitioners from industry to gain insights from their practices and further validate the generated reference models. The survey shows that, on average, 87% of the participants agree with the reference models generated by ACRM. Moreover, we propose an improved metric, wc2c, which analyzes the strengths and weaknesses of different types of software clustering techniques using the proposed reference models of the given software. Finally, we discuss the potential benefits of using ACRM in the analyzed projects, particularly in terms of improving software quality, reducing maintenance costs, and enhancing developer productivity.
{"title":"Automated construction of reference model for software remodularization through software evolution","authors":"Fanyi Meng, Hai Yu, Chun Yong Chong, Ying Wang, Zhiliang Zhu","doi":"10.1002/smr.2700","DOIUrl":"https://doi.org/10.1002/smr.2700","url":null,"abstract":"<p>The undocumented evolution of a software project and its underlying architecture underscores the need to recover the architecture from the software's implementation-level artifacts. Despite the existence of various software remodularization techniques, they often suffer from inaccuracies, and evaluating their effectiveness is challenging due to the absence of accurate “ground-truth” architectures or reference models. Prior studies on reference model construction are time-consuming and labor-intensive as it heavily relies on manual analysis by domain experts. Besides, other existing approaches that directly utilize the directory or package structure of the latest version can be unreliable, lacking in-depth analysis of the employed software structure. To address the above limitations, in this paper, we propose <b><span>A</span></b>utomated <b><span>C</span></b>onstruction of <b><span>R</span></b>eference <b><span>M</span></b>odel (ACRM), an approach for automatically constructing reference models by assigning weights to classes for various software projects using the metadata of all software versions and historical maintenance records. We evaluate ACRM through both quantitative and qualitative analyses. The experiment results provide quantitative validation and show that the generated reference models are reasonable, as confirmed by the relationship between proposed reference models and architectural smells or bugs. Furthermore, we conduct a survey among the practitioners from industry, to gain insights from practitioners' practices and further validate the generated reference models. The survey shows that, on average, 87% of the participants agree with the reference models generated by ACRM. Moreover, we propose an improved metric, <i>wc2c</i>, which analyzes the strengths and weaknesses of different types of software clustering techniques using the proposed reference models of the given software. Finally, we discuss the potential benefits of using ACRM in analyzed projects, particularly in terms of improving software quality, reducing maintenance costs, and enhancing developer productivity.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 10","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142430198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}