Just-in-time identification for cross-project correlated issues
Hao Ren, Yanhui Li, Lin Chen, Yulu Cao, Xiaowei Zhang, Changhai Nie
Journal of Software: Evolution and Process, 36(7). DOI: 10.1002/smr.2637. Published 2023-12-26.

Abstract: Issue tracking systems are now prevalent in software development, helping developers submit and discuss issues to solve development problems in software projects. Most previous studies have analyzed issue relations within a single project, such as recommending similar or duplicate bug issues. However, as co-development across multiple projects has become common, many issues are cross-project correlated (CPC); that is, an issue is associated with another issue in a different project. CPC issues can be considerably harder to resolve, because developers need information not only from their own project but also from related projects they are unfamiliar with. Identifying a CPC issue as early as possible is therefore a fundamental challenge for both managers and developers when allocating maintenance resources and estimating the effort to resolve it. This paper proposes 11 issue metrics in two groups, describing the textual summary and the reporter's activity, all of which can be extracted immediately after an issue is reported. We employ these 11 issue metrics to construct just-in-time (JIT) prediction models that identify CPC issues. To evaluate the CPC issue prediction models, we conduct experiments on 16 open-source data science and deep learning projects and compare our model with two baseline models based on textual features (Term Frequency-Inverse Document Frequency [TF-IDF] and word embeddings), which are commonly adopted in previous studies on issue prediction. The results show that the JIT prediction model based on issue metrics significantly improves CPC issue prediction under two evaluation indicators, the Matthews correlation coefficient (MCC) and F1. In addition, we find that the prediction model is better suited to large-scale, complex core projects in the open-source ecosystem.
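As a rough illustration of the two evaluation indicators named in the abstract, MCC and F1 can both be computed from a binary confusion matrix. The sketch below is not the authors' code; it assumes binary labels where 1 marks a CPC issue.

```python
import math

def confusion(y_true, y_pred):
    """Count TP, FP, FN, TN for binary labels (1 = CPC issue)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def mcc(y_true, y_pred):
    """Matthews correlation coefficient; 0.0 when undefined."""
    tp, fp, fn, tn = confusion(y_true, y_pred)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def f1(y_true, y_pred):
    """F1 score for the positive (CPC) class; 0.0 when undefined."""
    tp, fp, fn, _ = confusion(y_true, y_pred)
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
```

Unlike accuracy, MCC stays informative under the class imbalance typical of CPC issues, which is presumably why both indicators are reported.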
MetaLiDAR: Automated metamorphic testing of LiDAR-based autonomous driving systems
Zhen Yang, Song Huang, Changyou Zheng, Xingya Wang, Yang Wang, Chunyan Xia
Journal of Software: Evolution and Process, 36(7). DOI: 10.1002/smr.2644. Published 2023-12-20.

Abstract: Recent advances in artificial intelligence technology and perception components have driven the rapid development of autonomous vehicles. However, as safety-critical software, autonomous driving systems often make wrong judgments, seriously threatening human and property safety. LiDAR is one of the most critical sensors in autonomous vehicles, capable of accurately perceiving three-dimensional information about the environment. Nevertheless, the high cost of manually collecting and labeling point cloud data has left a dearth of testing methods for LiDAR-based perception modules. To bridge this critical gap, we introduce MetaLiDAR, a novel automated metamorphic testing methodology for LiDAR-based autonomous driving systems. First, we propose three object-level metamorphic relations reflecting the domain characteristics of autonomous driving systems. Next, we design three transformation modules so that MetaLiDAR can generate natural-looking follow-up point clouds. Finally, we define evaluation metrics corresponding to the metamorphic relations; based on these metrics, MetaLiDAR automatically determines whether source and follow-up test cases satisfy the relations. Our empirical study of five state-of-the-art LiDAR-based object detection models shows that MetaLiDAR can not only generate natural-looking test point clouds that expose 181,547 inconsistent behaviors across the models but also significantly enhance model robustness through retraining with the synthetic point clouds.
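To make the idea of an object-level metamorphic relation concrete, the sketch below checks one plausible relation: inserting a new object into a point cloud should not change the detections of objects already present. The relation, the (label, box) tuple representation, and the function name are illustrative assumptions, not MetaLiDAR's actual implementation.

```python
def detections_consistent(source_dets, followup_dets, inserted_labels):
    """Check an object-insertion metamorphic relation (illustrative):
    every object detected in the source point cloud must still be
    detected in the follow-up cloud, and any extra detection must
    correspond to an inserted object. Detections are hashable
    (label, box) tuples."""
    source, followup = set(source_dets), set(followup_dets)
    missing = source - followup  # violation: an original detection was lost
    extra = {d for d in followup - source
             if d[0] not in inserted_labels}  # violation: spurious detection
    return not missing and not extra
```

A test case violating such a relation counts as one "inconsistent behavior" of the model under test, without needing any ground-truth labels.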
An ecology-oriented convergence evolution analysis method of crossover service ecosystems
Yu Qiao, Jian Wang, Zhengli Liu, Wei Tang, Xiangfei Lu, Bing Li
Journal of Software: Evolution and Process, 36(7). DOI: 10.1002/smr.2635. Published 2023-12-14.

Abstract: Crossover cooperation and convergence among services have gained increasing attention in the modern service industry. To achieve value creation, service boundaries have expanded into other domains rather than remaining confined to their original ones, fostering the emergence of crossover services; consequently, a complex service ecosystem takes shape. However, a convergence-evolution mechanism for crossover services, which would support the adaptive transformation of service providers' businesses in this context, is still lacking. To address this problem, this paper proposes population-based and community-based convergence-evolution patterns from an ecological perspective. Based on the analysis of these evolution patterns and the driving forces of service evolution, we propose an ecology-oriented evolution analysis method. Furthermore, we devise an automated tool to support the evolution design of crossover service ecosystems. Case studies and evaluation experiments show the feasibility and effectiveness of the proposed method and the corresponding tool.
Readiness and maturity models for Industry 4.0: A systematic literature review
Hüseyin Ünlü, Onur Demirörs, Vahid Garousi
Journal of Software: Evolution and Process, 36(7). DOI: 10.1002/smr.2641. Published 2023-12-14.

Abstract: With its technological pillars, Industry 4.0 changes traditional manufacturing relationships from isolated optimized cells to fully integrated data and product flows across borders. However, the transition to Industry 4.0 is not a straightforward journey, and organizations need assistance along the way. A well-known approach for the early phases of the transition is to assess the organization's capability, and maturity models are frequently used to improve capability. In this systematic literature review (SLR), we analyzed 22 maturity and readiness models against 10 criteria: year, type, focus, structure, research methodology followed during model design, base frameworks, tool support, community support, objectivity, and extent of usage in practice. Our SLR provides a well-defined comparison that helps organizations choose and apply the available models. It shows that (1) there is no widely accepted maturity/readiness model for Industry 4.0, nor an international standard; (2) only a few models have received positive feedback from industry, whereas most provide no information about practical usage; and (3) the objectivity of the assessment method is questionable in most models. We also identify a number of open research issues for assessing readiness and maturity models for Industry 4.0.
On the sustainability of deep learning projects: Maintainers' perspective
Junxiao Han, Jiakun Liu, David Lo, Chen Zhi, Yishan Chen, Shuiguang Deng
Journal of Software: Evolution and Process, 36(7). DOI: 10.1002/smr.2645. Published 2023-12-13.

Abstract: Deep learning (DL) techniques have grown by leaps and bounds in both academia and industry over the past few years. Despite the growth of DL projects, little is known about how they evolve, whether maintainers in this domain face a dramatic increase in workload, and whether existing maintainers can sustain project development. To address this gap, we perform an empirical study that investigates the sustainability of DL projects, examines maintainers' workloads and workload growth, and compares them with traditional open-source software (OSS) projects. We first investigate how DL projects grow, then characterize maintainers' workload in DL projects and explore how it grows as the projects evolve. After that, we mine the relationships between maintainers' activities and the sustainability of DL projects, and finally compare the findings with traditional OSS projects. Our study reveals that although DL projects show increasing trends in most activities, maintainers' workloads show a decreasing trend. Meanwhile, the proportion of workload carried by maintainers in DL projects is significantly lower than in traditional OSS projects. Moreover, there are positive, moderate correlations between the sustainability of DL projects and the number of maintainers' releases, pushes, and merged pull requests. Our findings shed light on maintainers' workload and growth trends in DL and traditional OSS projects and highlight actionable directions for organizations, maintainers, and researchers.
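Correlations between activity counts and a sustainability measure, as reported above, are typically computed with a rank correlation such as Spearman's ρ, which tolerates the skewed distributions of repository metrics. The abstract does not specify the exact statistical tooling, so the self-contained sketch below is illustrative only.

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0
```

A value around 0.4-0.6 between, say, per-project merged-pull-request counts and a sustainability proxy would match the "positive and moderate" characterization in the abstract.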
GUI testing of Android applications: Investigating the impact of the number of testers on different exploratory testing strategies
Sergio Di Martino, Anna Rita Fasolino, Luigi Libero Lucio Starace, Porfirio Tramontana
Journal of Software: Evolution and Process, 36(7). DOI: 10.1002/smr.2640. Published 2023-12-11. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.2640

Abstract: Graphical user interface (GUI) testing plays a pivotal role in ensuring the quality and functionality of mobile apps. In this context, exploratory testing (ET), a distinctive methodology in which individual testers pursue a creative, experience-based approach to test design, is often used as an alternative or complement to traditional scripted testing. Managing the exploratory testing process is challenging and can easily result in either wasteful spending or inadequate software quality, owing to the relative unpredictability of exploratory testing activities, which depend on the skills and abilities of individual testers. A number of works have investigated the diversity of testers' performance when using ET strategies, often in a crowdtesting setting. These works, however, examined ET effectiveness in detecting bugs, not in scenarios where the goal is also to generate a re-executable test suite. Moreover, less work has evaluated the impact of adopting different exploratory testing strategies. As a first step toward filling this gap in the literature, we conduct an empirical evaluation involving four open-source Android apps and 20 master's students, whom we believe to be representative of practitioners partaking in exploratory testing activities. The students were asked to generate test suites for the apps using a capture-and-replay tool and different exploratory testing strategies. We then compare the effectiveness, in terms of aggregate code coverage, that different-sized groups of students using different exploratory testing strategies can achieve. The results give project managers interested in using exploratory approaches to test simple Android apps deeper insights into code coverage dynamics, on which they can base more informed decisions.
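The aggregate code coverage compared above is simply the union of the lines each tester in a group exercises. A minimal sketch of that computation follows; the data model (sets of covered line numbers) is an illustrative assumption, not the study's tooling.

```python
from itertools import combinations

def aggregate_coverage(per_tester_lines, total_lines):
    """Aggregate (union) statement coverage for a group of testers,
    where each tester contributes a set of covered line numbers."""
    covered = set().union(*per_tester_lines) if per_tester_lines else set()
    return len(covered) / total_lines

def best_group_coverage(per_tester_lines, group_size, total_lines):
    """Best aggregate coverage over all tester groups of a given size."""
    return max(aggregate_coverage(list(group), total_lines)
               for group in combinations(per_tester_lines, group_size))
```

Because the union grows sublinearly as testers overlap, sweeping `group_size` reveals the diminishing returns of adding testers that such studies aim to quantify.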
The vital role of community in open source software development: A framework for assessment and ranking
Jaswinder Singh, Anu Gupta, Preet Kanwal
Journal of Software: Evolution and Process, 36(7). DOI: 10.1002/smr.2643. Published 2023-12-07.

Abstract: Open source software (OSS) follows a development paradigm in which self-motivated volunteers scattered around the globe contribute code, documentation, feedback, feature recommendations, bug reports, and bug fixes. These volunteers, commonly referred to as the OSS project community, serve as the foundation of an OSS project, fostering its creation and sustenance and providing long-term support. The quality and sustainability of an OSS project rely on the development and structure of this self-governing community. When a business organization plans to adopt an OSS solution, it not only considers factors such as reliability, security, and scalability but also attaches significant importance to the likelihood that the project will be maintained and supported in the future, so that it can rely on it as a stable and secure technology solution. Modern cloud-based software hosting platforms, such as GitHub, offer a range of options for automatically and freely accessing the complete development history of millions of OSS projects. This ready availability of detailed development history has enabled researchers to draw quantitative, scientific inferences about the quality of an OSS project, which generally involves assessing three aspects: the software product, the development process, and the project community. Focusing on the project community, this work presents a Framework for Assessment and Ranking of OSS Community, based on a detailed examination of GitHub, the largest source code hosting and project collaboration platform. The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), from the multi-criteria decision-making toolkit, is used to assess the quality of the project community. The framework is validated by applying it to nine OSS projects and comparing the results with those obtained through an existing OSS evaluation methodology. The comparative analysis shows that the proposed framework aligns with that methodology while enabling in-depth analysis of the dynamics of volunteer communities, which previous evaluation methods lack. These insights can prove valuable to both potential adopters and project maintainers, helping them make informed strategic decisions.
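TOPSIS, the ranking technique used above, vector-normalizes a decision matrix, applies criterion weights, and scores each alternative by its relative closeness to the ideal solution. The compact sketch below assumes all criteria are benefit criteria (higher is better); the paper's actual community criteria and weights are not reproduced here.

```python
import math

def topsis(matrix, weights):
    """Score alternatives with TOPSIS (all criteria treated as benefit
    criteria for simplicity). matrix[i][j] is alternative i's value on
    criterion j. Returns closeness-to-ideal scores in [0, 1]."""
    ncrit = len(weights)
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) or 1.0
             for j in range(ncrit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)]
         for row in matrix]
    ideal = [max(col) for col in zip(*v)]  # best value per criterion
    anti = [min(col) for col in zip(*v)]   # worst value per criterion
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)  # distance to ideal solution
        d_neg = math.dist(row, anti)   # distance to anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg) if d_pos + d_neg else 0.0)
    return scores
```

Ranking the alternatives by descending score yields the community ranking; cost criteria would swap `max` and `min` for the affected columns.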
On the use of big data frameworks in big service management
Fedia Ghedass, Faouzi Ben Charrada
Journal of Software: Evolution and Process, 36(7). DOI: 10.1002/smr.2642. Published 2023-12-01.

Abstract: Over the last few years, big data has emerged as a paradigm for processing and analyzing large volumes of data. Coupled with other paradigms, such as cloud computing, service computing, and the Internet of Things, big data processing takes advantage of the underlying cloud infrastructure, which allows massive amounts of data to be hosted and managed, while service computing allows diverse data sources to be processed and delivered as on-demand services. This synergy between multiple paradigms has led to the emergence of big services: a cross-domain, large-scale, big data-centric service model. Apart from the adaptation issues (e.g., the need to react quickly to changes) inherited from other service models, the massiveness and heterogeneity of big services add a new factor of complexity to managing such a large-scale service ecosystem when execution deviates. Indeed, big services are often subject to frequent deviations at both the functional level (e.g., service failure, QoS degradation, and IoT resource unavailability) and the data level (e.g., data source unavailability or access restrictions). Handling these execution problems is beyond the capacity of traditional web/cloud service management tools, and most big service approaches have targeted specific management operations, such as selection and composition. To maintain a moderate state and high quality of their cross-domain execution, big services should be continuously monitored and managed in a scalable and autonomous way. Given the absence of self-management frameworks for large-scale services, the goal of this work is to design an autonomic management solution that takes full control of big services in an autonomous and distributed lifecycle process. We combine the autonomic computing and big data processing paradigms to endow big services with self-* and parallel processing capabilities. The proposed management framework takes advantage of the well-known MapReduce programming model and Apache Spark, and manages a big service's related data using knowledge graph technology. We also define a scalable embedding model that allows latent big service knowledge to be processed and learned in a distributed manner. Finally, a cooperative decision mechanism is defined to trigger non-conflicting management policies in response to captured deviations of the running big service. Big service management tasks (monitoring, embedding, and decision), as well as the core modules (autonomic managers' controller, embedding module, and coordinator), are implemented on top of Apache Spark as MapReduce jobs, while the processed data are represented as resilient distributed dataset (RDD) structures. To exploit the shared information exchanged between the workers and the master node (coordinator), and to further resolve conflicts between management policies, we endowed the proposed framework with a lightweight communication mechanism that allo
Anna Vacca, Michele Fredella, Andrea Di Sorbo, Corrado A. Visaggio, Mario Piattini
Blockchain is a cross-cutting technology that allows interactions among untrusted entities in a distributed manner without involving a trusted third party. Smart contracts (i.e., programs running on the blockchain) have enabled organizations to envision and implement solutions to real-world problems at lower cost and in less time. Given the immutability of blockchain and the lack of best practices for properly designing and developing smart contracts, it is crucial to assure smart contract quality before deployment. With the help of an exploratory survey involving developers and researchers, this paper identifies the practices and tools used to develop, implement, and evaluate smart contracts. The survey received 55 valid responses. These responses indicate (i) that inefficiencies may occur during the development cycle of a smart contract, especially in the requirements specification, design, and testing phases, and (ii) that a shared standard for evaluating the functional quality of implemented smart contracts is lacking. To start coping with these issues, this paper proposes adopting the functional suitability assessment measures recommended by the ISO/IEC 25000 standard, widely used in software engineering, and adapting them to the context of smart contracts. Through some examples, the manuscript also illustrates how to measure the functional completeness and correctness of smart contracts. The proposed procedure for measuring smart contract functional suitability benefits both developers and users of decentralized finance or non-fungible token platforms, data marketplaces, or shipping and real estate services, to mention just a few. In particular, it helps (i) better outline the responsibilities of smart contracts, (ii) uncover errors and deficiencies of smart contracts in the early stages, and (iii) ensure that the established requirements are met.
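The functional completeness and correctness measures mentioned above are, in the ISO/IEC 25000 family, defined as simple coverage ratios. The sketch below shows one plausible adaptation to smart contracts; the escrow-contract function names are hypothetical, and the exact measurement procedure of the paper may differ:

```python
def functional_completeness(specified, implemented):
    """Proportion of specified functions that are actually implemented
    (ISO/IEC 25023-style ratio; 1.0 = fully complete)."""
    if not specified:
        return 1.0
    return len(set(specified) & set(implemented)) / len(set(specified))

def functional_correctness(implemented, passing):
    """Proportion of implemented functions whose acceptance tests pass
    (1.0 = fully correct with respect to the test suite)."""
    if not implemented:
        return 1.0
    return len(set(implemented) & set(passing)) / len(set(implemented))

# Hypothetical functions of an escrow smart contract:
specified   = ["deposit", "release", "refund", "dispute"]
implemented = ["deposit", "release", "refund"]   # "dispute" is missing
passing     = ["deposit", "release"]             # "refund" fails its tests

print(functional_completeness(specified, implemented))  # 0.75
print(functional_correctness(implemented, passing))     # 2/3 ≈ 0.667
```

Because deployed contracts are immutable, computing such ratios before deployment — rather than after — is precisely where the proposed procedure adds value.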
{"title":"Functional suitability assessment of smart contracts: A survey and first proposal","authors":"Anna Vacca, Michele Fredella, Andrea Di Sorbo, Corrado A. Visaggio, Mario Piattini","doi":"10.1002/smr.2636","DOIUrl":"10.1002/smr.2636","url":null,"abstract":"<p>Blockchain is a cross-cutting technology allowing interactions among untrusted entities in a distributed manner without the need for involving a trusted third party. Smart contracts (i.e., programs running on the blockchain) enabled organizations to envision and implement solutions to real-world problems in less cost and time. Given the immutability of blockchain and the lack of best practices for properly designing and developing smart contracts, it is crucial to assure smart contract quality before deployment. With the help of an exploratory survey involving developers and researchers, this paper identifies the practices and tools used to develop, implement, and evaluate smart contracts. The survey received 55 valid responses. Such responses indicate that (i) inefficiencies may occur during the development cycle of a smart contract, especially regarding requirements specification, design, and testing phases, and (ii) the lack of a shared standard to evaluate the functional quality of implemented smart contracts. To start coping with these issues, the adoption of functional suitability assessment measures recommended by the ISO/IEC 25000 standard, widely used in software engineering, is proposed by adapting them to the context of smart contracts. Through some examples, the manuscript also illustrates how to measure the functional completeness and correctness of smart contracts. The proposed procedure to measure smart contract functional suitability brings advantages to both developers and users of decentralized finance or non-fungible tokens platforms, data marketplaces, or shipping and real estate services, just to mention a few. 
In particular, it helps (i) better outline the responsibilities of smart contracts, (ii) uncover errors and deficiencies of smart contracts in the early stages, and (iii) ensure that the established requirements are met.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 7","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.2636","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138527148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Derek Reimanis, Clemente Izurieta
Design patterns represent a means of communicating reusable solutions to common problems, provided they are implemented and maintained correctly. However, many design pattern instances erode as they age, sacrificing qualities they once provided. Identifying such instances of pattern decay is valuable because it allows for proactive attempts to extend the longevity and quality attributes of pattern components. Apart from structural decay, design patterns can exhibit symptoms of behavioral decay. We utilized a taxonomy that characterizes these negative behaviors and designed a case study wherein we measured structural and behavioral decay, hereafter referred to as pattern grime, as well as pattern quality and size, across pattern evolutions. We evaluated the relationships between structural and behavioral grime and found statistically significant cases of strong correlations between specific types of structural and behavioral grime. Furthermore, we extended the QATCH operational software quality model to incorporate design pattern evolution metrics and measured and correlated software quality to the presence of behavioral grime in software systems. Our results suggest a strong inverse relationship between software quality and behavioral grime.
{"title":"A study of behavioral decay in design patterns","authors":"Derek Reimanis, Clemente Izurieta","doi":"10.1002/smr.2638","DOIUrl":"10.1002/smr.2638","url":null,"abstract":"<p>Design patterns represent a means of communicating reusable solutions to common problems, provided they are implemented and maintained correctly. However, many design pattern instances erode as they age, sacrificing qualities they once provided. Identifying such instances of pattern decay is valuable because it allows for proactive attempts to extend the longevity and quality attributes of pattern components. Apart from structural decay, design patterns can exhibit symptoms of behavioral decay. We utilized a taxonomy that characterizes these negative behaviors and designed a case study wherein we measured structural and behavioral decay, hereafter referred to as pattern grime, as well as pattern quality and size, across pattern evolutions. We evaluated the relationships between structural and behavioral grime and found statistically significant cases of strong correlations between specific types of structural and behavioral grime. Furthermore, we extended the QATCH operational software quality model to incorporate design pattern evolution metrics and measured and correlated software quality to the presence of behavioral grime in software systems. 
Our results suggest a strong inverse relationship between software quality and behavioral grime.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"36 7","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138527154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}