Pub Date: 2024-07-01 | DOI: 10.1007/s00607-024-01309-7
Selcuk Aslan
The autonomous task success of an unmanned aerial vehicle (UAV), or its military specialization, the unmanned combat aerial vehicle (UCAV), has a direct relationship with the planned path. However, planning a path for a UAV or UCAV system requires solving a challenging optimization problem that considers different objectives concerning the enemy threats protecting the battlefield, fuel consumption or battery usage, and kinematic constraints on turning maneuvers. Because of the increasing demands on UAV systems and the game-changing roles they play, developing new and versatile path planning algorithms has become more critical and urgent. In this study, a greedy algorithm named Back-and-Forth (BaF) was designed and introduced for solving the path planning problem. The BaF algorithm gets its name from its main strategy, in which a heuristic approach generates two predecessor paths, one calculated from the start point to the target point and the other calculated in the reverse direction, and then combines the generated paths, exploiting their advantageous line segments to obtain safer, shorter, and more maneuverable path candidates. The performance of the BaF was investigated over three battlefield scenarios and the twelve test cases belonging to them. Moreover, the BaF was integrated into the workflow of a well-known meta-heuristic, the artificial bee colony (ABC) algorithm, and detailed experiments were carried out to evaluate the possible contribution of the BaF to the path planning capabilities of another technique. The results of the experiments showed that the BaF algorithm plans paths that are at least as promising as, and generally better than, those of the other tested meta-heuristic techniques, with perfect consistency, and runs nine or more times faster, as validated through the comparison between the BaF and ABC algorithms. The results further showed that integrating the BaF boosts the performance of the ABC and helps it to outperform all fifteen competitors on nine of the twelve test cases.
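The abstract's forward/backward splicing idea can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the threat layout, the cost function, the fixed x-steps, and the one-step-lookahead greedy rule are all assumptions made for the example.

```python
import math

# Hypothetical threats as (x, y, radius); cost penalizes flying inside a circle.
THREATS = [(4.0, 2.0, 2.0), (7.0, -1.0, 1.5)]

def point_cost(p):
    x, y = p
    cost = 0.0
    for tx, ty, r in THREATS:
        d = math.hypot(x - tx, y - ty)
        if d < r:                      # inside the threat: heavy penalty
            cost += (r - d) * 10.0
    return cost

def segment_cost(a, b):
    # Path length plus threat exposure, sampled at the far endpoint for simplicity.
    return math.hypot(b[0] - a[0], b[1] - a[1]) + point_cost(b)

def greedy_path(start, target, steps=10, y_choices=(-3, -2, -1, 0, 1, 2, 3)):
    """Greedily pick a y-offset at each fixed x-step (one-step lookahead)."""
    path = [start]
    for i in range(1, steps):
        x = start[0] + (target[0] - start[0]) * i / steps
        best = min(((x, y) for y in y_choices),
                   key=lambda p: segment_cost(path[-1], p))
        path.append(best)
    path.append(target)
    return path

def path_cost(path):
    return sum(segment_cost(a, b) for a, b in zip(path, path[1:]))

def back_and_forth(start, target, steps=10):
    fwd = greedy_path(start, target, steps)
    bwd = greedy_path(target, start, steps)[::-1]  # reversed into start->target order
    # Combine: splice the forward prefix with the backward suffix at every
    # junction index and keep the cheapest resulting candidate.
    candidates = [fwd, bwd]
    for i in range(1, len(fwd) - 1):
        candidates.append(fwd[:i] + bwd[i:])
    return min(candidates, key=path_cost)

best = back_and_forth((0.0, 0.0), (10.0, 0.0))
print(len(best), round(path_cost(best), 2))
```

Because the forward and backward paths are themselves candidates, the combined path is never worse than either predecessor, which mirrors the "utilize their advantageous line segments" strategy described above.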
{"title":"Back-and-Forth (BaF): a new greedy algorithm for geometric path planning of unmanned aerial vehicles","authors":"Selcuk Aslan","doi":"10.1007/s00607-024-01309-7","DOIUrl":"https://doi.org/10.1007/s00607-024-01309-7","url":null,"abstract":"<p>The autonomous task success of an unmanned aerial vehicle (UAV), or its military specialization, the unmanned combat aerial vehicle (UCAV), has a direct relationship with the planned path. However, planning a path for a UAV or UCAV system requires solving a challenging optimization problem that considers different objectives concerning the enemy threats protecting the battlefield, fuel consumption or battery usage, and kinematic constraints on turning maneuvers. Because of the increasing demands on UAV systems and the game-changing roles they play, developing new and versatile path planning algorithms has become more critical and urgent. In this study, a greedy algorithm named Back-and-Forth (BaF) was designed and introduced for solving the path planning problem. The BaF algorithm gets its name from its main strategy, in which a heuristic approach generates two predecessor paths, one calculated from the start point to the target point and the other calculated in the reverse direction, and then combines the generated paths, exploiting their advantageous line segments to obtain safer, shorter, and more maneuverable path candidates. The performance of the BaF was investigated over three battlefield scenarios and the twelve test cases belonging to them. Moreover, the BaF was integrated into the workflow of a well-known meta-heuristic, the artificial bee colony (ABC) algorithm, and detailed experiments were carried out to evaluate the possible contribution of the BaF to the path planning capabilities of another technique. The results of the experiments showed that the BaF algorithm plans paths that are at least as promising as, and generally better than, those of the other tested meta-heuristic techniques, with perfect consistency, and runs nine or more times faster, as validated through the comparison between the BaF and ABC algorithms. The results further showed that integrating the BaF boosts the performance of the ABC and helps it to outperform all fifteen competitors on nine of the twelve test cases.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-27 | DOI: 10.1007/s00607-024-01300-2
Fahim Niaz, Jian Zhang, Muhammad Khalid, Kashif Naseer Qureshi, Yang Zheng, Muhammad Younas, Naveed Imran
In recent years, millimeter wave sensors have taken on a paramount role, especially in the non-invasive and ubiquitous analysis of various materials and objects. This paper introduces a novel IoT-based fake currency detection system using millimeter wave (mmWave) sensing that leverages machine and deep learning algorithms to distinguish fake from genuine currency based on distinct sensor reflections. To gather these reflections, or signatures, from different currency notes, we utilize multiple receiving (RX) antennae of the radar sensor module. Our proposed framework encompasses three different approaches for genuine and fake currency detection: Convolutional Neural Network (CNN), k-Nearest Neighbor (k-NN), and Transfer Learning Technique (TLT). After extensive experiments, the proposed framework achieves classification accuracies of 96%, 94%, and 98% for CNN, k-NN, and TLT, respectively, in distinguishing 10 different currency notes using radar signals.
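Of the three classifiers, k-NN is simple enough to sketch from scratch: classify a query signature by the majority label among its k nearest training signatures. The feature vectors below are toy stand-ins for per-note RX reflection signatures, not real mmWave data.

```python
import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label
    among the k training examples nearest to the query."""
    nearest = sorted(train, key=lambda ex: euclidean(ex[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy reflection signatures (hypothetical values, one vector per note scan).
train = [
    ([0.9, 0.1, 0.2], "genuine"),
    ([0.8, 0.2, 0.1], "genuine"),
    ([0.2, 0.9, 0.7], "fake"),
    ([0.1, 0.8, 0.9], "fake"),
]
print(knn_classify(train, [0.85, 0.15, 0.15]))  # prints "genuine"
```

In practice a real pipeline would extract features (e.g. spectral magnitudes) from the raw radar returns before distance comparison; this sketch starts from already-extracted vectors.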
{"title":"AI enabled: a novel IoT-based fake currency detection using millimeter wave (mmWave) sensor","authors":"Fahim Niaz, Jian Zhang, Muhammad Khalid, Kashif Naseer Qureshi, Yang Zheng, Muhammad Younas, Naveed Imran","doi":"10.1007/s00607-024-01300-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01300-2","url":null,"abstract":"<p>In recent years, the significance of millimeter wave sensors has achieved a paramount role, especially in the non-invasive and ubiquitous analysis of various materials and objects. This paper introduces a novel IoT-based fake currency detection using millimeter wave (mmWave) that leverages machine and deep learning algorithms for the detection of fake and genuine currency based on their distinct sensor reflections. To gather these reflections or signatures from different currency notes, we utilize multiple receiving (<i>RX</i>) antennae of the radar sensor module. Our proposed framework encompasses three different approaches for genuine and fake currency detection, Convolutional Neural Network (CNN), k-nearest Neighbor (k-NN), and Transfer Learning Technique (TLT). After extensive experiments, the proposed framework exhibits impressive accuracy and obtained classification accuracy of 96%, 94%, and 98% for CNN, k-NN, and TLT in distinguishing 10 different currency notes using radar signals.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-24 | DOI: 10.1007/s00607-024-01310-0
Antonios Makris, Evangelos Psomakelis, Ioannis Korontanis, Theodoros Theodoropoulos, Ioannis Kontopoulos, Maria Pateraki, Christos Diou, Konstantinos Tserpes
In recent years, containerization has become more and more popular for deploying applications and services, and it has significantly contributed to the expansion of edge computing. The demand for effective and scalable container image management, however, increases as the number of deployed containers grows. One solution is to use a localized Docker registry at the edge, where images are stored closer to the deployment site. This approach can considerably reduce the latency and bandwidth required to download images from a central registry. In addition, it acts as a proactive caching mechanism, reducing download delays and network traffic. In this paper, we introduce an edge-enabled storage framework that incorporates a localized Docker registry. This framework aims to streamline the storage and distribution of container images, providing improved control, scalability, and optimized capabilities for edge deployment. Four demanding XR applications are employed as use cases to experiment with the proposed solution.
{"title":"Edge-driven Docker registry: facilitating XR application deployment","authors":"Antonios Makris, Evangelos Psomakelis, Ioannis Korontanis, Theodoros Theodoropoulos, Ioannis Kontopoulos, Maria Pateraki, Christos Diou, Konstantinos Tserpes","doi":"10.1007/s00607-024-01310-0","DOIUrl":"https://doi.org/10.1007/s00607-024-01310-0","url":null,"abstract":"<p>In recent years, containerization is becoming more and more popular for deploying applications and services and it has significantly contributed to the expansion of edge computing. The demand for effective and scalable container image management, however, increases as the number of containers deployed grows. One solution is to use a localized Docker registry at the edge, where the images are stored closer to the deployment site. This approach can considerably reduce the latency and bandwidth required to download images from a central registry. In addition, it acts as a proactive caching mechanism by optimizing the download delays and the network traffic. In this paper, we introduce an edge-enabled storage framework that incorporates a localized Docker registry. This framework aims to streamline the storage and distribution of container images, providing improved control, scalability, and optimized capabilities for edge deployment. Four demanding XR applications are employed as use cases to experiment with the proposed solution.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-20 | DOI: 10.1007/s00607-024-01307-9
Jorge Herculano, Willians Pereira, Marcelo Guimarães, Reinaldo Cotrim, Alirio de Sá, Flávio Assis, Raimundo Macêdo, Sérgio Gorender
Wireless Body Area Networks (WBANs) are wireless sensor networks that monitor the physiological and contextual data of the human body. Nodes in a WBAN communicate using short-range, low-power transmissions to minimize any impact on the human body's health and mobility. These transmissions are therefore subject to failures caused by radiofrequency interference or body mobility. Additionally, WBAN applications typically have timing constraints and carry dynamic traffic, which can change depending on the physiological conditions of the human body. Several approaches at the Medium Access Control (MAC) sublayer have been proposed to improve the reliability and efficiency of WBANs. This paper proposes and uses a systematic literature review (SLR) method to identify, classify, and statistically analyze published works proposing MAC approaches for WBAN efficiency and reliability under dynamic network traffic, radiofrequency interference, and body mobility. In particular, we extend a traditional SLR method by adding a new step that selects publications based on qualitative parameters. As a result, we identify the challenges and proposed solutions, highlight advantages and disadvantages, and suggest future work.
{"title":"MAC approaches to communication efficiency and reliability under dynamic network traffic in wireless body area networks: a review","authors":"Jorge Herculano, Willians Pereira, Marcelo Guimarães, Reinaldo Cotrim, Alirio de Sá, Flávio Assis, Raimundo Macêdo, Sérgio Gorender","doi":"10.1007/s00607-024-01307-9","DOIUrl":"https://doi.org/10.1007/s00607-024-01307-9","url":null,"abstract":"<p>Wireless Body Area Networks (WBANs) are wireless sensor networks that monitor the physiological and contextual data of the human body. Nodes in a WBAN communicate using short-range and low-power transmissions to minimize any impact on the human body’s health and mobility. These transmissions thus become subject to failures caused by radiofrequency interference or body mobility. Additionally, WBAN applications typically have timing constraints and carry dynamic traffic, which can change depending on the physiological conditions of the human body. Several approaches for the Medium Access Control (MAC) sublayer have been proposed to improve the reliability and efficiency of the WBANs. This paper proposes and uses a systematic literature review (SLR) method to identify, classify, and statistically analyze the published works with MAC approaches for WBAN efficiency and reliability under dynamic network traffic, radiofrequency interference, and body mobility. In particular, we extend a traditional SLR method by adding a new step to select publications based on qualitative parameters. 
As a result, we identify the challenges and proposed solutions, highlight advantages and disadvantages, and suggest future works.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-17 | DOI: 10.1007/s00607-024-01305-x
Iure Fé, Tuan Anh Nguyen, Mario Di Mauro, Fabio Postiglione, Alex Ramos, André Soares, Eunmi Choi, Dugki Min, Jae Woo Lee, Francisco Airton Silva
Computer system resilience refers to the ability of a computer system to continue functioning even in the face of unexpected events or disruptions. These disruptions can be caused by a variety of factors, such as hardware failures, software glitches, cyber attacks, or even natural disasters. Modern computational environments need applications that can recover quickly from major disruptions while also being environmentally sustainable. Balancing system resilience with energy efficiency is challenging, as efforts to improve one can harm the other. This paper presents a method to enhance disaster survivability in microservice architectures, particularly those using Kubernetes in cloud-based environments, with a focus on optimizing electrical energy use. To save energy, our work adopts a consolidation strategy, that is, grouping multiple microservices on a single host. Our approach uses a widely adopted analytical model, the Generalized Stochastic Petri Net (GSPN). GSPNs are a powerful modeling technique widely used in various fields, including engineering, computer science, and operations research. One of the primary advantages of GSPNs is the ability to model complex systems with a high degree of accuracy. Additionally, GSPNs can capture both logical and stochastic behavior, making them ideal for systems that combine the two. Our GSPN models compute a number of metrics, including recovery time, system availability, reliability, mean time to failure, and the configuration of cloud-based microservices. We compared our approach against others focusing on survivability or efficiency. Our approach aligns with Recovery Time Objectives during sudden disasters and offers the fastest recovery, requiring 9% less warning time to fully recover in disasters with an alert when compared to strategies with similar electrical consumption. It also saves about 27% energy compared to low-consolidation strategies and 5% against high consolidation under static conditions.
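A full GSPN is beyond a short example, but its simplest special case, a two-state up/down continuous-time Markov chain with failure rate 1/MTTF and repair rate 1/MTTR, already yields the steady-state availability metric mentioned above. This is a minimal sketch, not the paper's model; the MTTF/MTTR figures are arbitrary illustrative values.

```python
def steady_state_availability(mttf_hours, mttr_hours):
    """Two-state up/down CTMC (the simplest special case of a GSPN
    availability model): fraction of time the system is in the 'up' state."""
    return mttf_hours / (mttf_hours + mttr_hours)

def downtime_hours_per_year(availability):
    # Expected unavailable time over one year of continuous operation.
    return (1.0 - availability) * 365 * 24

A = steady_state_availability(mttf_hours=2000.0, mttr_hours=4.0)
print(round(A, 5), round(downtime_hours_per_year(A), 1))
```

Consolidation enters such a model by changing the failure and repair rates per host (more services share one host's fate), which is where the survivability/energy trade-off discussed above shows up.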
{"title":"Energy-aware dynamic response and efficient consolidation strategies for disaster survivability of cloud microservices architecture","authors":"Iure Fé, Tuan Anh Nguyen, Mario Di Mauro, Fabio Postiglione, Alex Ramos, André Soares, Eunmi Choi, Dugki Min, Jae Woo Lee, Francisco Airton Silva","doi":"10.1007/s00607-024-01305-x","DOIUrl":"https://doi.org/10.1007/s00607-024-01305-x","url":null,"abstract":"<p>Computer system resilience refers to the ability of a computer system to continue functioning even in the face of unexpected events or disruptions. These disruptions can be caused by a variety of factors, such as hardware failures, software glitches, cyber attacks, or even natural disasters. Modern computational environments need applications that can recover quickly from major disruptions while also being environmentally sustainable. Balancing system resilience with energy efficiency is challenging, as efforts to improve one can harm the other. This paper presents a method to enhance disaster survivability in microservice architectures, particularly those using Kubernetes in cloud-based environments, with a focus on optimizing electrical energy use. To save energy, our work adopts a consolidation strategy, that is, grouping multiple microservices on a single host. Our approach uses a widely adopted analytical model, the Generalized Stochastic Petri Net (GSPN). GSPNs are a powerful modeling technique widely used in various fields, including engineering, computer science, and operations research. One of the primary advantages of GSPNs is the ability to model complex systems with a high degree of accuracy. Additionally, GSPNs can capture both logical and stochastic behavior, making them ideal for systems that combine the two. Our GSPN models compute a number of metrics, including recovery time, system availability, reliability, mean time to failure, and the configuration of cloud-based microservices. We compared our approach against others focusing on survivability or efficiency. Our approach aligns with Recovery Time Objectives during sudden disasters and offers the fastest recovery, requiring 9% less warning time to fully recover in disasters with an alert when compared to strategies with similar electrical consumption. It also saves about 27% energy compared to low-consolidation strategies and 5% against high consolidation under static conditions.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-15 | DOI: 10.1007/s00607-024-01299-6
Hector A. de la Fuente-Anaya, H. Marin-Castro, Miguel Morales-Sandoval, Jose Juan Garcia-Hernandez
{"title":"Business process discovery as a service with event log privacy and access control over discovered models","authors":"Hector A. de la Fuente-Anaya, H. Marin-Castro, Miguel Morales-Sandoval, Jose Juan Garcia-Hernandez","doi":"10.1007/s00607-024-01299-6","DOIUrl":"https://doi.org/10.1007/s00607-024-01299-6","url":null,"abstract":"","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141336149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-14 | DOI: 10.1007/s00607-024-01301-1
Ragi Krishnan, Selvam Durairaj
{"title":"Reliability and performance of resource efficiency in dynamic optimization scheduling using multi-agent microservice cloud-fog on IoT applications","authors":"Ragi Krishnan, Selvam Durairaj","doi":"10.1007/s00607-024-01301-1","DOIUrl":"https://doi.org/10.1007/s00607-024-01301-1","url":null,"abstract":"","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141339482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-08 | DOI: 10.1007/s00607-024-01303-z
Francisco Plana, Andrés Abeliuk, Jorge Pérez
We present a simple and quick method to approximate network centrality indexes. Our approach, called QuickCent, is inspired by so-called fast and frugal heuristics, which were initially proposed to model some human decision and inference processes. The centrality index that we estimate is the harmonic centrality, a measure based on shortest-path distances that is infeasible to compute exactly on large networks. We compare QuickCent with known machine learning algorithms on synthetic network datasets and on some empirical networks. Our experiments show that QuickCent makes estimates that are competitive in accuracy with the best alternative methods tested, on both synthetic scale-free networks and empirical networks. QuickCent achieves low-variance error estimates, even with a small training set. Moreover, QuickCent is comparable in efficiency (accuracy and time cost) to more complex methods. We discuss and provide some insight into how QuickCent exploits the fact that in some networks, such as those generated by preferential attachment, local density measures such as the in-degree can be a good proxy for the size of the network region to which a node has access, opening up the possibility of approximating expensive size-based indices such as the harmonic centrality. This same fact may explain some evidence we provide that QuickCent performs particularly well on empirical information networks, such as citation networks or the internet. Our initial results show that simple heuristics are a promising line of research in the context of network measure estimation.
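The harmonic centrality that QuickCent approximates has a direct exact definition: the sum of reciprocal shortest-path distances to a node. Computing it exactly takes one BFS per node, which is what makes it expensive at scale. A minimal sketch for an undirected graph (not the paper's code):

```python
from collections import deque

def harmonic_centrality(adj, v):
    """Sum of 1/d(u, v) over all nodes u reachable from v in an undirected
    graph, computed exactly with a single breadth-first search from v."""
    dist = {v: 0}
    q = deque([v])
    while q:
        node = q.popleft()
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                q.append(nb)
    return sum(1.0 / d for u, d in dist.items() if u != v)

# Star graph: the hub (node 0) is at distance 1 from every leaf.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(harmonic_centrality(star, 0))  # hub: 1 + 1 + 1 = 3.0
print(harmonic_centrality(star, 1))  # leaf: 1 + 1/2 + 1/2 = 2.0
```

The hub's score exceeds every leaf's, illustrating why a cheap local measure like degree can serve as a proxy for this expensive global index on heavy-tailed networks.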
{"title":"Quickcent: a fast and frugal heuristic for harmonic centrality estimation on scale-free networks","authors":"Francisco Plana, Andrés Abeliuk, Jorge Pérez","doi":"10.1007/s00607-024-01303-z","DOIUrl":"https://doi.org/10.1007/s00607-024-01303-z","url":null,"abstract":"<p>We present a simple and quick method to approximate network centrality indexes. Our approach, called <i>QuickCent</i>, is inspired by so-called <i>fast and frugal</i> heuristics, which are heuristics initially proposed to model some human decision and inference processes. The centrality index that we estimate is the <i>harmonic</i> centrality, which is a measure based on shortest-path distances, so infeasible to compute on large networks. We compare <i>QuickCent</i> with known machine learning algorithms on synthetic network datasets, and some empirical networks. Our experiments show that <i>QuickCent</i> can make estimates that are competitive in accuracy with the best alternative methods tested, either on synthetic scale-free networks or empirical networks. QuickCent has the feature of achieving low error variance estimates, even with a small training set. Moreover, <i>QuickCent</i> is comparable in efficiency—accuracy and time cost—to more complex methods. We discuss and provide some insight into how QuickCent exploits the fact that in some networks, such as those generated by preferential attachment, local density measures such as the in-degree, can be a good proxy for the size of the network region to which a node has access, opening up the possibility of approximating expensive indices based on size such as the harmonic centrality. This same fact may explain some evidence we provide that QuickCent would have a superior performance on empirical information networks, such as citations or the internet. 
Our initial results show that simple heuristics are a promising line of research in the context of network measure estimations.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-31 | DOI: 10.1007/s00607-024-01296-9
Habib Un Nisa, Saif Ur Rehman Khan, Shahid Hussain, Wen-Li Wang
Software requirements play a vital role in ensuring a software product's success. However, it remains challenging to implement all of the user requirements, especially in a resource-constrained development environment. To deal with this situation, a requirements prioritization (RP) process can help determine the sequence in which user requirements are implemented. However, existing RP techniques suffer from major challenges such as a lack of automation, excessive effort, and reliance on stakeholders' involvement to initiate the process. This study proposes an automated requirements prioritization approach called association rule mining-oriented (ARMO) to address these challenges. The automation process of the ARMO approach first pre-processes the requirements descriptions and extracts features. The features are then examined and analyzed through the applied rule mining technique to prioritize the requirements automatically and efficiently without the involvement of stakeholders. In this work, an evaluation model was further developed to assess the effectiveness of the proposed ARMO approach. To validate the efficacy of the ARMO approach, a case study was conducted on real-world software projects, grounded in the accuracy, precision, recall, and F1-score measures. The promising experimental results demonstrate the ability of the proposed approach to prioritize user requirements. The proposed approach successfully prioritizes user requirements automatically, without requiring significant effort or stakeholders' involvement to initiate the RP process.
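The abstract does not detail ARMO's rule-mining step, but a minimal support/confidence association-rule miner over requirement feature sets conveys the underlying idea. This is a brute-force sketch under stated assumptions: the feature labels are hypothetical, and real Apriori-style miners prune the search space rather than enumerate pairs.

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Brute-force single-antecedent rules A -> B over sets of items,
    filtered by support (frequency of A∪B) and confidence (supp(A∪B)/supp(A))."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    rules = []
    for a, b in combinations(items, 2):
        for ant, con in (({a}, {b}), ({b}, {a})):
            supp = support(ant | con)
            if supp >= min_support and support(ant) > 0:
                conf = supp / support(ant)
                if conf >= min_confidence:
                    rules.append((tuple(ant), tuple(con), supp, conf))
    return rules

# Toy feature sets extracted from requirement descriptions (hypothetical labels).
reqs = [
    {"security", "login"},
    {"security", "login", "audit"},
    {"reporting", "export"},
    {"security", "login"},
]
for ant, con, supp, conf in mine_rules(reqs):
    print(ant, "->", con, round(supp, 2), round(conf, 2))
```

Mined rules like `login -> security` can then feed a ranking step: requirements whose features participate in strong rules are promoted, which is the kind of stakeholder-free prioritization signal the approach describes.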
{"title":"An association rule mining-oriented approach for prioritizing functional requirements","authors":"Habib Un Nisa, Saif Ur Rehman Khan, Shahid Hussain, Wen-Li Wang","doi":"10.1007/s00607-024-01296-9","DOIUrl":"https://doi.org/10.1007/s00607-024-01296-9","url":null,"abstract":"<p>Software requirements play a vital role in ensuring a software product’s success. However, it remains a challenging task to implement all of the user requirements, especially in a resource-constrained development environment. To deal with this situation, a requirements prioritization (RP) process can help determine the sequence for the user requirements to be implemented. However, existing RP techniques suffer from some major challenges such as lack of automation, excessive effort, and reliance on stakeholders’ involvement to initiate the process. This study intends to propose an automated requirements prioritization approach called association rule mining-oriented (ARMO) to address these challenges. The automation process of the ARMO approach incorporates activities to first pre-process the requirements description and extract features. The features are then examined and analyzed through the applied rule mining technique to prioritize the requirements automatically and efficiently without the involvement of stakeholders. In this work, an evaluation model was further developed to assess the effectiveness of the proposed ARMO approach. To validate the efficacy of the ARMO approach, a case study was conducted on real-world software projects grounded on the accuracy, precision, recall, and f1-score measures. The promising experimental results demonstrate the ability of the proposed approach to prioritize user requirements. 
The proposed approach can successfully prioritize user requirements automatically without requiring a significant amount of effort and stakeholders’ involvement to initiate the RP process.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":null,"pages":null},"PeriodicalIF":3.7,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141189452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}