Pub Date: 2022-08-01. DOI: 10.1109/SEAA56994.2022.00055
M. Stochel, P. Chołda, Mariusz R. Wawrowski
The constantly growing amount of software in use, accompanied by a huge amount of technical debt, gradually raises concern in the industry. New technologies and software development processes add yet another degree of freedom, boosting complexity. As software development and delivery techniques evolve, the technical debt perspective should follow. Taking into account all software artefacts that enable value delivery to customers, and embracing the DevOps paradigm and its holistic focus on the software development lifecycle, the strategy presented in this paper enabled the stabilization of a large telecommunication software system after a set of consecutive complex merges. The research question of this paper looks for evidence on whether prioritization of technical debt mitigation efforts brings a faster return on investment. A 2-year-long case study on technical debt prioritization and mitigation conducted on this software system resulted in improved quality and in the stabilization of feature development efforts (cost- and time-based). The tangible gains from applying this approach comprise an over 50% decrease in stability issues, screening improved by over 30%, and six times better predictability of delivery time (reducing the allocation of stabilization effort and time).
Title: "Adopting DevOps Paradigm in Technical Debt Prioritization and Mitigation" (2022 48th Euromicro Conference on Software Engineering and Advanced Applications, SEAA)
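The return-on-investment framing above can be illustrated with a toy prioritization sketch; the interest/principal scoring and all numbers below are illustrative assumptions, not the strategy evaluated in the paper:

```python
# Toy technical-debt backlog: each item has a one-off remediation cost
# ("principal") and a recurring cost it imposes per release ("interest").
# Items that pay back their remediation cost fastest are mitigated first.

def payback_releases(item):
    """Number of releases until saved interest covers the principal."""
    return item["principal"] / item["interest"]

def prioritize(backlog):
    """Order debt items by fastest return on investment."""
    return sorted(backlog, key=payback_releases)

backlog = [
    {"name": "flaky integration tests", "principal": 40, "interest": 10},
    {"name": "duplicated build scripts", "principal": 15, "interest": 1},
    {"name": "legacy merge conflicts",   "principal": 30, "interest": 15},
]

ranked = prioritize(backlog)
# The item with the highest recurring cost relative to its fix cost comes first.
```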
Pub Date: 2022-08-01. DOI: 10.1109/SEAA56994.2022.00041
Asad Ali, C. Gravino
Several studies have raised concerns about the performance of estimation techniques when employed with the default parameters provided by specific development toolkits, e.g., Weka. In this paper, we evaluate the impact of parameter optimization on nine different estimation techniques in the Software Development Effort Estimation (SDEE) and Software Fault Prediction (SFP) domains to provide more generic findings on the impact of parameter optimization. To this aim, we employ three datasets from the SDEE domain (China, Maxwell, Nasa) and three regression-based datasets from the SFP domain (Ant, Xalan, Xerces). For parameter optimization, we consider four optimization algorithms from different families: Grid Search, Random Search, Simulated Annealing, and Bayesian Optimization. The estimation techniques are: Support Vector Machine, Random Forest, Classification and Regression Tree, Neural Networks, Averaged Neural Networks, k-Nearest Neighbor, Partial Least Square, MultiLayer Perceptron, and Gradient Boosting Machine. Results reveal that, with both SDEE and SFP datasets, seven out of nine estimation techniques require optimization/configuration of at least one parameter. In the majority of cases, the parameters of the employed estimation techniques are sensitive to optimization on specific types of data. Moreover, not all parameters need to be optimized, as some of them are not sensitive to optimization.
Title: "The Impact of Parameters Optimization in Software Prediction Models" (2022 48th Euromicro Conference on Software Engineering and Advanced Applications, SEAA)
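A minimal sketch of how grid search and random search tune a single parameter, here the k of a hand-rolled k-NN effort estimator on toy data (the data, model, and search budget are invented stand-ins, not the paper's experimental setup):

```python
import random
import statistics

# Toy effort data: (size_kloc, effort_person_months)
train = [(1, 3), (2, 5), (3, 8), (5, 13), (8, 20), (13, 34)]
val = [(4, 10), (6, 15), (10, 26)]

def knn_estimate(train, x, k):
    """Estimate effort as the mean of the k nearest training points."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return statistics.mean(p[1] for p in nearest)

def mae(train, val, k):
    """Mean absolute error of the k-NN estimator on a validation set."""
    return statistics.mean(abs(knn_estimate(train, x, k) - y) for x, y in val)

def grid_search(ks):
    """Exhaustively evaluate every candidate k."""
    return min(ks, key=lambda k: mae(train, val, k))

def random_search(ks, budget, seed=0):
    """Evaluate only `budget` randomly sampled candidates."""
    rng = random.Random(seed)
    sampled = [rng.choice(ks) for _ in range(budget)]
    return min(sampled, key=lambda k: mae(train, val, k))

ks = list(range(1, 7))
best_grid = grid_search(ks)          # k=2 minimizes validation MAE here
best_rand = random_search(ks, budget=4)
```

Random search trades optimality for a fixed evaluation budget, which is the usual reason it scales better than grid search when several parameters are tuned at once.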
Pub Date: 2022-08-01. DOI: 10.1109/SEAA56994.2022.00029
Jean Malm, Eduard Paul Enoiu, Masud Abu Naser, B. Lisper, Z. Porkoláb, Sigrid Eldh
In recent years, maintaining test code quality has gained more attention due to increased automation and the growing focus on issues caused during this process. Test code may become long and complex, but maintaining its quality is mostly a manual process that may not scale in big software projects. Moreover, bugs in test code may give a false impression about the correctness or performance of the production code. Static program analysis (SPA) tools are nowadays used to maintain the quality of software projects. However, these tools are either not used to analyse test code, or any analysis results on the test code are suppressed. This is especially true since SPA tools are not tailored to generate precise warnings on test code. This paper investigates the use of SPA on test code by employing three state-of-the-art general-purpose static analysers on a curated set of projects used in industry and a random sample of relatively popular and large open-source C/C++ projects. We have found a number of built-in code-checking modules that can detect quality issues in test code; however, these checkers need some tailoring to obtain relevant results. We observed design choices in test frameworks that raise noisy warnings in analysers, and propose a set of augmentations to the checkers or the analysis framework to obtain precise warnings from static analysers.
Title: "An Evaluation of General-Purpose Static Analysis Tools on C/C++ Test Code" (2022 48th Euromicro Conference on Software Engineering and Advanced Applications, SEAA)
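One way to approximate the proposed augmentation of suppressing framework-induced noise is a small post-filter over analyser output; the warning format, the file-name heuristic, and the set of noisy checks below are illustrative assumptions:

```python
import re

# Matches the common "file:line:col: warning: message [check-name]" format.
WARNING_RE = re.compile(
    r"^(?P<file>[^:]+):\d+:\d+: warning: .+ \[(?P<check>[^\]]+)\]$"
)

# Checks assumed to fire on test-framework macro expansions (illustrative set).
NOISY_CHECKS_IN_TESTS = {"cppcoreguidelines-avoid-goto", "cert-err58-cpp"}

def is_test_file(path):
    """Crude heuristic: treat any path containing 'test' as test code."""
    return "test" in path.lower()

def filter_warnings(lines):
    """Drop warnings from noisy checks when they occur in test files."""
    kept = []
    for line in lines:
        m = WARNING_RE.match(line)
        if m and is_test_file(m.group("file")) and m.group("check") in NOISY_CHECKS_IN_TESTS:
            continue  # suppressed: likely framework-macro noise
        kept.append(line)
    return kept

raw = [
    "src/parser.cpp:10:5: warning: use of goto [cppcoreguidelines-avoid-goto]",
    "test/parser_test.cpp:42:3: warning: use of goto [cppcoreguidelines-avoid-goto]",
    "test/parser_test.cpp:50:1: warning: narrowing conversion [bugprone-narrowing-conversions]",
]
kept = filter_warnings(raw)
```

The production-code warning and the non-noisy test warning survive; only the framework-style goto warning inside a test file is dropped.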
Pub Date: 2022-08-01. DOI: 10.1109/SEAA56994.2022.00049
Francesco Lomio, L. Pascarella, Fabio Palomba, Valentina Lenarduzzi
Fine-grained just-in-time defect prediction aims at identifying likely defective files within new commits. Popular techniques are based on supervised learning, where machine learning algorithms are fed with historical data. One limitation of these techniques concerns the use of imbalanced data, which contains too few defective samples to enable a proper learning phase. To overcome this problem, recent work has shown that anomaly detection can be used as an alternative. With our study, we aim at assessing how anomaly detection can be employed for the problem of fine-grained just-in-time defect prediction. We conduct an empirical investigation on 32 open-source projects, designing and evaluating three anomaly detection methods for fine-grained just-in-time defect prediction. Our results do not show significant advantages that would justify preferring anomaly detection over machine learning approaches.
Title: "Regularity or Anomaly? On The Use of Anomaly Detection for Fine-Grained JIT Defect Prediction" (2022 48th Euromicro Conference on Software Engineering and Advanced Applications, SEAA)
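The anomaly detection framing can be illustrated with a simple z-score outlier detector over per-commit change metrics; the metric, the threshold, and the data are illustrative, and the paper evaluates three dedicated anomaly detection methods rather than this one:

```python
import statistics

def zscore_outliers(values, threshold=2.0):
    """Indices of values more than `threshold` population standard
    deviations away from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Lines changed per commit in a stream of mostly routine edits;
# the 240-line change stands out as a candidate defect-prone commit.
lines_changed = [12, 8, 15, 10, 9, 11, 240, 13, 7, 14]
suspicious = zscore_outliers(lines_changed)
```

The appeal of this framing is that it needs no labelled defective commits at all, which is exactly the imbalance problem supervised learning struggles with.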
Pub Date: 2022-08-01. DOI: 10.1109/SEAA56994.2022.00053
G. Lami, G. Spagnolo
The suppliers of software-intensive electronic automotive components face technical challenges due to the innovation rush and the growing time pressure from customers. As the quality of on-board automotive electronic systems strongly depends on the quality of their development practices, car manufacturers and suppliers proactively focus on improving technical and organizational processes. Automotive SPICE (ASPICE) is today the reference standard for assessing and improving automotive electronics processes and projects in this setting. As car manufacturers use ASPICE to qualify their suppliers of software-intensive systems, the standard has become a market demand. This paper identifies and discusses the benefits and impact of integrating and harmonizing Technical Debt Management (TDM) into an ASPICE-compliant software development project. In addition, this paper provides a conceptual framework and a reference process description for the integration of ASPICE and TDM practices in a sample software engineering process.
Title: "Technical Debt Management in Automotive Software Industry" (2022 48th Euromicro Conference on Software Engineering and Advanced Applications, SEAA)
Pub Date: 2022-08-01. DOI: 10.1109/SEAA56994.2022.00035
Stefano Lambiase, Gemma Catolino, Fabiano Pecorelli, D. Tamburri, Fabio Palomba, Willem-Jan van den Heuvel, F. Ferrucci
Estimating and understanding productivity still represents a crucial task for researchers and practitioners. Researchers have spent significant effort identifying the factors that influence software developers’ productivity, providing several approaches for analyzing and predicting this metric. Although various works have focused on evaluating the impact of human factors on productivity, little is known about the influence of cultural/geographical diversity in software development communities. Indeed, in previous studies, researchers treated cultural aspects as an abstract concept without providing a quantitative representation. This work provides an empirical assessment of the relationship between the cultural and geographical dispersion of a development community (namely, how diverse a community is in terms of the cultural attitudes and geographical co-location of its members) and its productivity. To reach our aim, we built a statistical model that contains product and socio-technical factors as independent variables to assess their correlation with productivity, i.e., the number of commits performed in a given time. Then, we ran our model on data from 25 open-source communities on GitHub. The results of our study indicate that cultural and geographical dispersion impact productivity, encouraging managers and practitioners to consider such aspects during all phases of the software development lifecycle.
Title: "“There and Back Again?” On the Influence of Software Community Dispersion Over Productivity" (2022 48th Euromicro Conference on Software Engineering and Advanced Applications, SEAA)
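A miniature of such a statistical model is a closed-form simple linear regression of commit counts on a single dispersion index; the data below are synthetic, and the paper's model additionally includes product and socio-technical controls:

```python
def linreg(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Dispersion index (0 = fully homogeneous community) vs. commits per month.
dispersion = [0.1, 0.2, 0.4, 0.5, 0.7, 0.9]
commits = [120, 118, 105, 98, 90, 76]

intercept, slope = linreg(dispersion, commits)
# A negative slope indicates higher dispersion co-occurring with fewer commits
# in this synthetic sample; it is correlation, not a causal claim.
```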
Pub Date: 2022-08-01. DOI: 10.1109/SEAA56994.2022.00022
Hamdy Michael Ayas, Hartmut Fischer, P. Leitner, F. D. O. Neto
Testing has a prominent role in revealing faults in software based on microservices. One of the most important discussion points in microservice architectures (MSAs) is the granularity of services, often at different levels of abstraction. Similarly, the granularity of tests in MSAs is reflected in different test types. However, it is challenging to conceptualize how the overall testing architecture comes together when combining testing at different levels of abstraction for microservices. There is no empirical evidence on the overall testing architecture in such microservices implementations. Furthermore, there is a need to empirically understand how the current state of practice resonates with existing best practices on testing. In this study, we mine GitHub to find candidate projects for an in-depth, qualitative assessment of their test artifacts. We analyze 16 repositories that use microservices and include various test artifacts, focusing on four projects that use consumer-driven contract testing. Our results demonstrate how these projects cover different levels of testing. This study (i) drafts a testing architecture, including activities and artifacts, and (ii) demonstrates how these align with best practices and guidelines. Our proposed architecture helps the categorization of system and test artifacts in empirical studies of microservices. Finally, we showcase a view of the boundaries between different levels of testing in systems using microservices.
Title: "An Empirical Analysis of Microservices Systems Using Consumer-Driven Contract Testing" (2022 48th Euromicro Conference on Software Engineering and Advanced Applications, SEAA)
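The core mechanism of consumer-driven contract testing, where the consumer publishes the response shape it relies on and the provider verifies it, can be sketched without any framework; the endpoint fields and types below are invented for illustration, and real projects typically use dedicated tooling such as Pact:

```python
# Contract declared by the consumer: only the fields and types it actually uses.
ORDER_CONTRACT = {"id": int, "status": str, "total": float}

def verify_contract(contract, response):
    """Return a list of contract violations found in a provider response.
    Extra fields in the response are allowed; consumers ignore them."""
    violations = []
    for field, expected_type in contract.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(
                f"wrong type for {field}: {type(response[field]).__name__}"
            )
    return violations

# Provider-side verification against stubbed responses.
good = {"id": 7, "status": "shipped", "total": 19.99, "extra": "ignored"}
bad = {"id": "7", "status": "shipped"}

assert verify_contract(ORDER_CONTRACT, good) == []
```

The key design point is that the contract belongs to the consumer, so a provider change only fails verification when it breaks something a consumer actually depends on.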
Pub Date: 2022-08-01. DOI: 10.1109/SEAA56994.2022.00063
A. Nouri, C. Berger, Fredrik Törner
Using continuous development, deployment, and monitoring (CDDM) to understand and improve applications in a customer’s context is widespread for non-safety applications such as smartphone apps or web applications, where it enables rapid and innovative feature improvements. Having demonstrated its potential in such domains, CDDM may also have the potential to improve software development for automotive functions, as some OEMs have described at a high level in their financial company communiqués. However, applying a CDDM strategy also faces challenges from a process-adherence and documentation perspective, as required for safety-related products such as autonomous driving systems (ADS) and as guided by industry standards such as ISO 26262 [1] and ISO 21448 [2]. Existing publications on CDDM in safety-relevant contexts either address safety-critical functions at a rather generic level, and thus not specifically ADS or automotive, or concentrate only on software and hence miss the particular context of an automotive OEM: well-established legacy processes and the need to adapt them, and aspects originating from the OEM's role as a system integrator for software/software, hardware/hardware, and hardware/software. In this paper, particular challenges from the automotive domain to better adopt CDDM are identified and discussed to shed light on research gaps for enhancing CDDM, especially for the software development of safe ADS. The challenges are identified from today’s well-established industrial ways of working by conducting interviews with domain experts, complemented by a literature study.
Title: "An Industrial Experience Report about Challenges from Continuous Monitoring, Improvement, and Deployment for Autonomous Driving Features" (2022 48th Euromicro Conference on Software Engineering and Advanced Applications, SEAA)
Pub Date: 2022-08-01. DOI: 10.1109/SEAA56994.2022.00046
Z. Alizadehsani, Daniel Feitosa, Theodoros Maikantis, Apostolos Ampatzoglou, A. Chatzigeorgiou, David Berrocal-Macías, Alfonso González-Briones, J. Corchado, Márcio Mateus, Johannes Groenewold
Developing software based on services is one of the fastest-emerging programming paradigms in software development. Service-based software development relies on the composition of services (i.e., pieces of code already built and deployed in the cloud) through orchestrated API calls. Black-box reuse can play a prominent role in this programming paradigm, in the sense that identifying and reusing already existing/deployed services can save substantial development effort. According to the literature, identifying reusable assets (i.e., components, classes, or services) is more successful and efficient when the discovery process is domain-specific. To facilitate domain-specific service discovery, we propose a service classification approach that can categorize services into an application domain, given only the service description. To validate the accuracy of our classification approach, we trained a machine-learning model on thousands of open-source services and tested it on 67 services developed within two companies employing service-based software development. The study results suggest that the classification algorithm performs adequately on a test set that does not overlap with the training set, and is thus (with some confidence) transferable to other industrial cases. Additionally, we expand the body of knowledge on software categorization by highlighting sets of domains that constitute ‘grey zones’ in service classification.
{"title":"Service Classification through Machine Learning: Aiding in the Efficient Identification of Reusable Assets in Cloud Application Development","authors":"Z. Alizadehsani, Daniel Feitosa, Theodoros Maikantis, Apostolos Ampatzoglou, A. Chatzigeorgiou, David Berrocal-Macías, Alfonso González-Briones, J. Corchado, Márcio Mateus, Johannes Groenewold","doi":"10.1109/SEAA56994.2022.00046","DOIUrl":"https://doi.org/10.1109/SEAA56994.2022.00046","url":null,"abstract":"Developing software based on services is one of the most emerging programming paradigms in software development. Service-based software development relies on the composition of services (i.e., pieces of code already built and deployed in the cloud) through orchestrated API calls. Black-box reuse can play a prominent role when using this programming paradigm, in the sense that identifying and reusing already existing/deployed services can save substantial development effort. According to the literature, identifying reusable assets (i.e., components, classes, or services) is more successful and efficient when the discovery process is domain-specific. To facilitate domain-specific service discovery, we propose a service classification approach that can categorize services to an application domain, given only the service description. To validate the accuracy of our classification approach, we have trained a machine-learning model on thousands of open-source services and tested it on 67 services developed within two companies employing service-based software development. The study results suggest that the classification algorithm can perform adequately in a test set that does not overlap with the training set; thus, being (with some confidence) transferable to other industrial cases. 
Additionally, we expand the body of knowledge on software categorization by highlighting sets of domains that consist ‘grey-zones’ in service classification.","PeriodicalId":269970,"journal":{"name":"2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","volume":"299 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114328479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
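The core idea of the paper above — assigning a service to an application domain given only its textual description — can be illustrated with a minimal text-classification sketch. This is not the authors' implementation: the domains, training descriptions, and query below are invented for illustration, and a tiny multinomial Naive Bayes over bag-of-words stands in for whatever model the study actually trained.

```python
# Illustrative sketch only: domain classification of services from their
# descriptions using Naive Bayes with add-one smoothing (stdlib only).
# All domains and descriptions here are hypothetical examples.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return [w.strip(".,()").lower() for w in text.split() if w.strip(".,()")]

def train(samples):
    """samples: list of (description, domain) pairs."""
    word_counts = defaultdict(Counter)   # domain -> word frequencies
    domain_counts = Counter()            # domain -> number of samples
    vocab = set()
    for text, domain in samples:
        tokens = tokenize(text)
        word_counts[domain].update(tokens)
        domain_counts[domain] += 1
        vocab.update(tokens)
    return word_counts, domain_counts, vocab

def classify(description, model):
    word_counts, domain_counts, vocab = model
    total = sum(domain_counts.values())
    best, best_score = None, float("-inf")
    for domain in domain_counts:
        # log prior + log likelihood with Laplace (add-one) smoothing
        score = math.log(domain_counts[domain] / total)
        denom = sum(word_counts[domain].values()) + len(vocab)
        for token in tokenize(description):
            score += math.log((word_counts[domain][token] + 1) / denom)
        if score > best_score:
            best, best_score = domain, score
    return best

samples = [
    ("REST API for processing credit card payments and refunds", "finance"),
    ("service computing loan interest and account balances", "finance"),
    ("endpoint streaming weather forecasts and temperature data", "weather"),
    ("API returning rainfall and wind speed observations", "weather"),
]
model = train(samples)
print(classify("microservice exposing account payments history", model))  # finance
```

In practice a study like this would use a far richer feature representation and thousands of labeled services; the 'grey zones' the authors mention correspond to domains whose vocabularies overlap enough that such per-domain scores become nearly indistinguishable.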
Pub Date : 2022-08-01 DOI: 10.1109/SEAA56994.2022.00079
Lucas Maciel, Alice Oliveira, Riei Rodrigues, Williams Santiago, A. Silva, Gustavo Carvalho, Breno Miranda
Context: Test automation is often seen as a possible solution to overcome the challenges of testing mobile devices. However, most of the automation techniques adopted for mobile testing are intrusive and, sometimes, unrealistic. One possible solution for coping with intrusive and unrealistic testing is the use of robots. Despite the growing interest in the intersection between robotics and software testing, the motivations, the usefulness, and the return on investment of adopting robots for supporting testing activities are not clear. Objective: We aim to survey the literature on the use of robotics for supporting mobile testing, with a focus on the motivations, the types of tests that are automated, and the reported effectiveness/efficiency. Method: We conduct a systematic mapping study on robotic testing of mobile devices (hereafter referred to as robotic mobile testing). We searched primary studies published since 2000 by querying five digital libraries and by performing backward and forward snowballing cycles. Results: We started with a set of 1353 papers and, after applying our study protocol, we selected a final set of 20 primary studies. We provide both a quantitative analysis and a qualitative evaluation of the motivations, types of tests automated, and the effectiveness/efficiency reported by the selected studies. Conclusions: Based on the selected studies, allowing more realistic interactions is among the main motivations for adopting robotic mobile testing. The tests automated with the support of robots are usually system-level tests targeting stress, interface, and performance testing. More empirical evidence is needed to support the claimed benefits. Most of the surveyed works do not compare the effectiveness and efficiency of the proposed robotics-based approach against traditional automation techniques. We discuss the implications of our findings for researchers and practitioners, and outline a research agenda.
{"title":"A Systematic Mapping Study on Robotic Testing of Mobile Devices","authors":"Lucas Maciel, Alice Oliveira, Riei Rodrigues, Williams Santiago, A. Silva, Gustavo Carvalho, Breno Miranda","doi":"10.1109/SEAA56994.2022.00079","DOIUrl":"https://doi.org/10.1109/SEAA56994.2022.00079","url":null,"abstract":"Context: Test automation is often seen as a possible solution to overcome the challenges of testing mobile devices. However, most of the automation techniques adopted for mobile testing are intrusive and, sometimes, unrealistic. One possible solution for coping with intrusive and unrealistic testing is the use of robots. Despite the growing interest in the intersection between robotics and software testing, the motivations, the usefulness, and the return of investment of adopting robots for supporting testing activities are not clear. Objective: We aim at surveying the literature on the use of robotics for supporting mobile testing with a focus on the motivations, the types of tests that are automated, and the reported effectiveness/efficiency. Method: We conduct a systematic mapping study on robotic testing of mobile devices (hereafter, referred as robotic mobile testing). We searched primary studies published since 2000 by querying five digital libraries, and by performing backward and forward snowballing cycles. Results: We started with a set of 1353 papers and after applying our study protocol, we selected a final set of 20 primary studies. We provide both a quantitative analysis, and a qualitative evaluation of the motivations, types of tests automated and the effectiveness/efficiency reported by the selected studies. Conclusions: Based on the selected studies, allowing more realistic interactions is among the main motivations for adopting robotic mobile testing. The tests automated with the support of robots are usually system-level tests targeting stress, interface, and performance testing. 
More empirical evidence is needed for supporting the claimed benefits. Most of the surveyed work do not compare the effectiveness and efficiency of the proposed robotics-based approach against traditional automation techniques. We discuss the implications of our findings for researchers and practitioners, and outline a research agenda.","PeriodicalId":269970,"journal":{"name":"2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123208227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}