Edge-driven Docker registry: facilitating XR application deployment
Antonios Makris, Evangelos Psomakelis, Ioannis Korontanis, Theodoros Theodoropoulos, Ioannis Kontopoulos, Maria Pateraki, Christos Diou, Konstantinos Tserpes
Pub Date: 2024-06-24 | DOI: 10.1007/s00607-024-01310-0
In recent years, containerization has become increasingly popular for deploying applications and services, and it has contributed significantly to the expansion of edge computing. As the number of deployed containers grows, however, so does the demand for effective and scalable container image management. One solution is a localized Docker registry at the edge, where images are stored closer to the deployment site. This approach can considerably reduce the latency and bandwidth required to download images from a central registry, and it acts as a proactive caching mechanism that reduces download delays and network traffic. In this paper, we introduce an edge-enabled storage framework that incorporates a localized Docker registry. The framework aims to streamline the storage and distribution of container images, providing improved control, scalability, and optimized capabilities for edge deployment. Four demanding XR applications serve as use cases for experimenting with the proposed solution.
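The caching benefit of a localized registry can be illustrated with a toy sketch. This is not the paper's framework: the `EdgeRegistry` class, image names, and sizes below are illustrative assumptions, modeling only the cache-aside behavior in which repeated pulls of the same image are served from the edge instead of the central registry.

```python
# Toy model (not the paper's framework): a localized edge registry acting as a
# cache in front of a central registry. Only the first pull of an image
# transfers bytes from the central registry; later pulls are served locally.

class EdgeRegistry:
    def __init__(self, central_images):
        self.central = central_images      # image name -> size in MB (assumed values)
        self.local = {}                    # images cached at the edge
        self.central_bytes = 0             # MB fetched from the central registry

    def pull(self, image):
        if image not in self.local:        # cache miss: fetch from central once
            self.central_bytes += self.central[image]
            self.local[image] = self.central[image]
        return self.local[image]           # cache hit: served from the edge

registry = EdgeRegistry({"xr-renderer:1.0": 850, "xr-tracker:2.1": 420})
for _ in range(5):
    registry.pull("xr-renderer:1.0")       # only the first pull hits central
registry.pull("xr-tracker:2.1")

print(registry.central_bytes)              # 1270 MB instead of 5 * 850 + 420 = 4670 MB
```

With five pulls of the same image, the edge registry transfers 1270 MB from the central site rather than 4670 MB, which is the proactive-caching effect the abstract describes.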
MAC approaches to communication efficiency and reliability under dynamic network traffic in wireless body area networks: a review
Jorge Herculano, Willians Pereira, Marcelo Guimarães, Reinaldo Cotrim, Alirio de Sá, Flávio Assis, Raimundo Macêdo, Sérgio Gorender
Pub Date: 2024-06-20 | DOI: 10.1007/s00607-024-01307-9
Wireless Body Area Networks (WBANs) are wireless sensor networks that monitor the physiological and contextual data of the human body. Nodes in a WBAN communicate using short-range, low-power transmissions to minimize any impact on the human body's health and mobility. These transmissions are therefore subject to failures caused by radiofrequency interference or body mobility. Additionally, WBAN applications typically have timing constraints and carry dynamic traffic, which can change depending on the physiological conditions of the human body. Several approaches at the Medium Access Control (MAC) sublayer have been proposed to improve the reliability and efficiency of WBANs. This paper proposes and applies a systematic literature review (SLR) method to identify, classify, and statistically analyze published works on MAC approaches for WBAN efficiency and reliability under dynamic network traffic, radiofrequency interference, and body mobility. In particular, we extend a traditional SLR method with a new step that selects publications based on qualitative parameters. As a result, we identify the challenges and proposed solutions, highlight advantages and disadvantages, and suggest future work.
Energy-aware dynamic response and efficient consolidation strategies for disaster survivability of cloud microservices architecture
Iure Fé, Tuan Anh Nguyen, Mario Di Mauro, Fabio Postiglione, Alex Ramos, André Soares, Eunmi Choi, Dugki Min, Jae Woo Lee, Francisco Airton Silva
Pub Date: 2024-06-17 | DOI: 10.1007/s00607-024-01305-x
Computer system resilience refers to the ability of a computer system to continue functioning in the face of unexpected events or disruptions, which can be caused by a variety of factors such as hardware failures, software glitches, cyber attacks, or even natural disasters. Modern computational environments need applications that can recover quickly from major disruptions while also being environmentally sustainable. Balancing system resilience with energy efficiency is challenging, as efforts to improve one can harm the other. This paper presents a method to enhance disaster survivability in microservice architectures, particularly those using Kubernetes in cloud-based environments, with a focus on optimizing electrical energy use. To save energy, our work adopts a consolidation strategy, i.e., grouping multiple microservices on a single host. Our approach uses a widely adopted analytical model, the Generalized Stochastic Petri Net (GSPN). GSPNs are a powerful modeling technique used in various fields, including engineering, computer science, and operations research. One of their primary advantages is the ability to model complex systems with a high degree of accuracy; they also capture both logical and stochastic behavior, making them well suited to systems that combine the two. Our GSPN models compute metrics such as recovery time, system availability, reliability, Mean Time to Failure, and the configuration of cloud-based microservices. We compared our approach against others focusing on survivability or efficiency. Our approach meets Recovery Time Objectives during sudden disasters and offers the fastest recovery, requiring 9% less warning time to fully recover in disasters with an alert when compared to strategies with similar electrical consumption. It also saves about 27% energy compared to low-consolidation strategies and 5% compared to high consolidation under static conditions.
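The availability-style metrics such models report can be illustrated far more simply than with a full GSPN. The sketch below uses a two-state (up/down) Markov model, and the failure and repair rates are illustrative assumptions, not values from the paper; it shows only how Mean Time To Failure and Mean Time To Repair combine into steady-state availability and expected downtime.

```python
# Minimal sketch of availability metrics (a two-state up/down Markov model, not
# the paper's GSPN). MTTF and MTTR values below are illustrative assumptions.

mttf_hours = 2000.0                 # Mean Time To Failure  (1 / failure rate)
mttr_hours = 4.0                    # Mean Time To Repair   (1 / repair rate)

# Steady-state availability of an alternating up/down process:
#   A = MTTF / (MTTF + MTTR)
availability = mttf_hours / (mttf_hours + mttr_hours)

# Expected downtime over one year of continuous operation
downtime_hours_per_year = (1 - availability) * 24 * 365

print(round(availability, 5))             # ~0.998
print(round(downtime_hours_per_year, 1))  # ~17.5 hours/year
```

A GSPN generalizes this by tracking many such stochastic transitions (failures, repairs, consolidation moves) over a full microservice deployment, but the steady-state analysis it performs reduces to the same kind of computation.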
Quickcent: a fast and frugal heuristic for harmonic centrality estimation on scale-free networks
Francisco Plana, Andrés Abeliuk, Jorge Pérez
Pub Date: 2024-06-08 | DOI: 10.1007/s00607-024-01303-z
We present a simple and fast method to approximate network centrality indexes. Our approach, called QuickCent, is inspired by so-called fast and frugal heuristics, heuristics originally proposed to model some human decision and inference processes. The centrality index we estimate is the harmonic centrality, a measure based on shortest-path distances that is infeasible to compute exactly on large networks. We compare QuickCent with well-known machine learning algorithms on synthetic network datasets and on some empirical networks. Our experiments show that QuickCent makes estimates that are competitive in accuracy with the best alternative methods tested, on both synthetic scale-free networks and empirical networks. QuickCent achieves estimates with low error variance, even with a small training set, and is comparable in efficiency (accuracy and time cost) to more complex methods. We discuss and provide some insight into how QuickCent exploits the fact that in some networks, such as those generated by preferential attachment, local density measures such as the in-degree can be a good proxy for the size of the network region a node has access to, opening up the possibility of approximating expensive size-based indices such as the harmonic centrality. The same fact may explain some evidence we provide that QuickCent performs particularly well on empirical information networks, such as citation networks or the internet. Our initial results show that simple heuristics are a promising line of research for network measure estimation.
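The exact computation QuickCent avoids can be made concrete. The sketch below (not the QuickCent implementation) computes harmonic centrality of one node by breadth-first search on a small directed graph: the sum of reciprocal shortest-path distances from every other node, which requires one BFS per node and is the cost that motivates a cheap in-degree-based proxy on large networks.

```python
from collections import deque

# Exact harmonic centrality of a single node (for illustration; QuickCent
# approximates this instead of computing it). For directed graphs, harmonic
# centrality of t sums 1/d(u, t) over all nodes u that can reach t, so we
# BFS on the reversed graph starting from t.

def harmonic_centrality(graph, target):
    # graph: node -> list of successor nodes (directed edges)
    reverse = {u: [] for u in graph}
    for u, succs in graph.items():
        for v in succs:
            reverse[v].append(u)
    dist = {target: 0}
    queue = deque([target])
    while queue:
        u = queue.popleft()
        for v in reverse[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(1.0 / d for node, d in dist.items() if node != target)

g = {"a": ["b"], "b": ["c"], "c": [], "d": ["c"]}
print(harmonic_centrality(g, "c"))   # 1/1 (from b) + 1/1 (from d) + 1/2 (from a) = 2.5
```

Note that node "c" here also has the highest in-degree, a small instance of the proxy relationship the paper exploits on preferential-attachment networks.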
An association rule mining-oriented approach for prioritizing functional requirements
Habib Un Nisa, Saif Ur Rehman Khan, Shahid Hussain, Wen-Li Wang
Pub Date: 2024-05-31 | DOI: 10.1007/s00607-024-01296-9
Software requirements play a vital role in ensuring a software product's success. However, implementing all user requirements remains a challenging task, especially in a resource-constrained development environment. To deal with this situation, a requirements prioritization (RP) process can help determine the sequence in which user requirements are implemented. Existing RP techniques, however, suffer from major challenges such as a lack of automation, excessive effort, and reliance on stakeholder involvement to initiate the process. This study proposes an automated requirements prioritization approach called association rule mining-oriented (ARMO) to address these challenges. The ARMO process first pre-processes the requirement descriptions and extracts features. The features are then examined and analyzed through the applied rule-mining technique to prioritize the requirements automatically and efficiently, without stakeholder involvement. In this work, an evaluation model was further developed to assess the effectiveness of the proposed approach. To validate its efficacy, a case study was conducted on real-world software projects using the accuracy, precision, recall, and F1-score measures. The promising experimental results demonstrate the ability of the proposed approach to prioritize user requirements automatically, without requiring significant effort or stakeholder involvement to initiate the RP process.
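The rule-mining step such an approach builds on can be sketched briefly. This is a generic support/confidence computation over per-requirement feature sets, not ARMO itself: the feature names, thresholds, and restriction to single-antecedent rules are illustrative assumptions.

```python
from itertools import combinations

# Generic association-rule mining over requirement feature sets (a hedged
# sketch of the kind of step ARMO applies; not the paper's algorithm).
# A rule "a -> b" is reported if enough requirements contain both features
# (support) and b appears in most requirements containing a (confidence).

def mine_rules(transactions, min_support=0.5, min_confidence=0.6):
    n = len(transactions)
    items = set().union(*transactions)
    count = {i: sum(1 for t in transactions if i in t) for i in items}
    pair = {(a, b): sum(1 for t in transactions if a in t and b in t)
            for a, b in combinations(sorted(items), 2)}
    rules = []
    for (a, b), c in pair.items():
        for ante, cons in ((a, b), (b, a)):
            support = c / n
            confidence = c / count[ante]
            if support >= min_support and confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules

# Toy feature sets extracted from four requirement descriptions (assumed data)
reqs = [{"login", "security"}, {"login", "security"},
        {"login", "reports"}, {"security", "audit"}]
for ante, cons, s, conf in mine_rules(reqs):
    print(f"{ante} -> {cons}  support={s:.2f} confidence={conf:.2f}")
```

On this toy data the miner finds the two rules between "login" and "security" (support 0.50, confidence 0.67 each); a prioritizer could then rank requirements whose features participate in strong rules.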
Multi-label learning for identifying co-occurring class code smells
Mouna Hadj-Kacem, Nadia Bouassida
Pub Date: 2024-05-27 | DOI: 10.1007/s00607-024-01294-x
Code smell identification is crucial in software maintenance. The existing literature mostly focuses on identifying a single code smell at a time. In practice, however, a software artefact typically exhibits multiple code smells simultaneously; studies of their diffuseness suggest that 59% of smelly classes are affected by more than one smell. To meet this complexity found in real-world projects, we propose a multi-label learning-based approach to identify eight code smells at the class level, i.e., in the most severe software artefacts, which need to be prioritized in the refactoring process. In our experiments, we used 12 algorithms from different multi-label learning methods across 30 open-source Java projects and present significant findings. We explored co-occurrences between class code smells and examined the impact of correlations on prediction results. Additionally, we assessed multi-label learning methods to compare data adaptation versus algorithm adaptation. Our findings highlight the effectiveness of the Ensemble of Classifier Chains and Binary Relevance in achieving high-performance results.
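Binary Relevance, one of the methods the study found effective, decomposes an L-label problem into L independent binary problems. The sketch below shows that structure with a deliberately naive base learner (a nearest-centroid rule) over toy class metrics; the metrics, labels, and base learner are illustrative assumptions, not the paper's setup.

```python
# Binary Relevance sketch: one independent binary classifier per code smell.
# Base learner here is nearest-centroid (illustrative only; the paper uses
# real multi-label learners over real code metrics).

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def centroid(points):
    return [sum(p[k] for p in points) / len(points) for k in range(len(points[0]))]

def train_br(X, Y, num_labels):
    # Each label j gets its own binary sub-problem, trained independently.
    models = []
    for j in range(num_labels):
        pos = [x for x, y in zip(X, Y) if y[j] == 1]
        neg = [x for x, y in zip(X, Y) if y[j] == 0]
        models.append((centroid(pos), centroid(neg)))
    return models

def predict_br(models, x):
    # Predict label j iff x is closer to that label's positive centroid.
    return [1 if euclidean(x, pc) <= euclidean(x, nc) else 0 for pc, nc in models]

# Toy per-class metrics: (lines of code, number of methods)
# Labels per class: [GodClass, LongMethod] (assumed smells for illustration)
X = [[500, 40], [30, 3], [450, 35], [60, 25]]
Y = [[1, 1], [0, 0], [1, 0], [0, 1]]
models = train_br(X, Y, num_labels=2)
print(predict_br(models, [480, 38]))   # [1, 1]: both smells predicted to co-occur
```

Classifier Chains differ from this in that each binary classifier additionally receives the previous classifiers' predictions as features, which lets the chain model the label correlations the paper examines.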
A novel bidirectional LSTM model for network intrusion detection in SDN-IoT network
G. Sri vidhya, R. Nagarajan
Pub Date: 2024-05-27 | DOI: 10.1007/s00607-024-01295-w
Advances in technology have made Internet of Things (IoT) devices easy to adopt. IoT devices can interact without human intervention, which enables the creation of smart cities. Nevertheless, security concerns persist within IoT networks. To address them, Software Defined Networking (SDN) has been introduced as a centrally controlled network architecture that can mitigate security issues in IoT deployments. A major security concern when integrating SDN and IoT is Distributed Denial of Service (DDoS) attacks, which target the network controller precisely because it is centrally controlled. Tackling this issue effectively requires real-time, high-performance, and precise solutions. In recent years, there has been growing interest in applying intelligent deep learning techniques to Network Intrusion Detection Systems (NIDS) in Software-Defined IoT networks (SDN-IoT). The Wireless Network Intrusion Detection System (WNIDS) concept aims to create an SDN controller that efficiently monitors and manages smart IoT devices. The proposed WNIDS method analyzes the CSE-CIC-IDS2018 and SDN-IoT datasets to detect and categorize intrusions or attacks in the SDN-IoT network. The Bidirectional LSTM (BiLSTM)-based WNIDS model effectively detects intrusions, achieving accuracy rates of 99.97% and 99.96% for binary and multi-class classification on the CSE-CIC-IDS2018 dataset. Similarly, on the SDN-IoT dataset, the model achieves 95.13% accuracy for binary classification and 92.90% for multi-class classification, showing strong performance on both datasets.
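The "bidirectional" part of a BiLSTM can be shown structurally with a much simpler recurrence. The sketch below uses a plain tanh cell rather than an LSTM (the input/forget/output gating is omitted, and the weights and toy features are assumptions), but it shows why each record in a traffic window gets both past and future context: one pass runs forward, one backward, and their per-timestep states are concatenated.

```python
import math

# Structural sketch of a bidirectional recurrence (simple tanh cell; a real
# BiLSTM adds gated memory). Weights and the toy flow-feature window are
# illustrative assumptions.

def rnn_pass(sequence, w_x=0.5, w_h=0.3):
    h, states = 0.0, []
    for x in sequence:
        h = math.tanh(w_x * x + w_h * h)   # hidden state carries context forward
        states.append(h)
    return states

def bidirectional(sequence):
    forward = rnn_pass(sequence)
    # Run the same cell over the reversed sequence, then realign its states
    backward = list(reversed(rnn_pass(list(reversed(sequence)))))
    # Concatenate both directions per timestep: each record now sees
    # context from earlier AND later records in the window.
    return list(zip(forward, backward))

window = [0.2, 0.9, 0.1, 0.8]              # toy normalized flow features
states = bidirectional(window)
print(len(states), len(states[0]))          # 4 timesteps, 2 directions each
```

In an intrusion-detection setting, a classifier head over these concatenated states can flag a flow based on the whole surrounding window, not just the packets seen so far.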
Pub Date : 2024-05-27DOI: 10.1007/s00607-024-01297-8
Dimitrios Papathanasiou, Kostas Kolomvatsos
Context-aware data management becomes the focus of several research efforts, which can be placed at the intersection between the Internet of Things (IoT) and Edge Computing (EC). Huge volumes of data captured by IoT devices are processed in EC environments. Even if edge nodes undertake the responsibility of data management tasks, they are characterized by limited storage and computational resources compared to Cloud. Apparently, this mobilises the introduction of intelligent data selection methods capable of deciding which of the collected data should be kept locally based on end users/applications requests. In this paper, we devise a mechanism where edge nodes learn their own data selection filters, and decide the distributed allocation of newly collected data to their peers and/or Cloud once these data are not conformed with the local data filters. Our mechanism intents to postpone final decisions on data transfer to Cloud (e.g., data centers) to pervasively keep relevant data as close and as long to end users/applications as possible. The proposed mechanism derives a data-selection map across edge nodes by learning specific data sub-spaces, which facilitate the placement of processing tasks (e.g., analytics queries). This is very critical when we target to support near real time decision making and would like to minimize all parts of the tasks allocation procedure. We evaluate and compare our approach against baselines and schemes found in the literature showcasing its applicability in pervasive edge computing environments.
{"title":"Data management and selectivity in collaborative pervasive edge computing","authors":"Dimitrios Papathanasiou, Kostas Kolomvatsos","doi":"10.1007/s00607-024-01297-8","DOIUrl":"https://doi.org/10.1007/s00607-024-01297-8","url":null,"abstract":"<p>Context-aware data management becomes the focus of several research efforts, which can be placed at the intersection between the Internet of Things (IoT) and Edge Computing (EC). Huge volumes of data captured by IoT devices are processed in EC environments. Even if edge nodes undertake the responsibility of data management tasks, they are characterized by limited storage and computational resources compared to Cloud. Apparently, this mobilises the introduction of intelligent data selection methods capable of deciding which of the collected data should be kept locally based on end users/applications requests. In this paper, we devise a mechanism where edge nodes learn their own data selection filters, and decide the distributed allocation of newly collected data to their peers and/or Cloud once these data are not conformed with the local data filters. Our mechanism intents to postpone final decisions on data transfer to Cloud (e.g., data centers) to pervasively keep relevant data as close and as long to end users/applications as possible. The proposed mechanism derives a data-selection map across edge nodes by learning specific data sub-spaces, which facilitate the placement of processing tasks (e.g., analytics queries). This is very critical when we target to support near real time decision making and would like to minimize all parts of the tasks allocation procedure. 
We evaluate and compare our approach against baselines and schemes found in the literature showcasing its applicability in pervasive edge computing environments.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"31 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141171578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
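The allocation idea in the abstract above — keep data that conforms to the local filter, forward non-conforming data to the best-matching peer, and defer the Cloud as long as possible — can be sketched as follows. This is a minimal illustration, not the paper's method: here a learned "data sub-space" is simplified to an axis-aligned bounding box per node, and the `EdgeNode`/`allocate` names are hypothetical.

```python
import numpy as np

class EdgeNode:
    """Toy edge node whose learned data sub-space is an axis-aligned box."""

    def __init__(self, name, low, high):
        self.name = name
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        self.store = []  # data kept locally

    def conforms(self, x):
        """True if the vector x falls inside this node's data filter."""
        x = np.asarray(x, dtype=float)
        return bool(np.all((x >= self.low) & (x <= self.high)))

    def margin(self, x):
        """L1 distance of x from the node's sub-space (0 if inside)."""
        x = np.asarray(x, dtype=float)
        return float(np.sum(np.maximum(self.low - x, 0.0) +
                            np.maximum(x - self.high, 0.0)))

def allocate(x, local, peers):
    """Keep x locally if it matches the local filter; otherwise try the
    best-matching peer; only fall back to the Cloud as a last resort."""
    if local.conforms(x):
        local.store.append(x)
        return local.name
    best = min(peers, key=lambda p: p.margin(x))
    if best.conforms(x):
        best.store.append(x)
        return best.name
    return "cloud"
```

For example, with `edge-0` covering `[0,1]^2` and `edge-1` covering `[1,2]^2`, the point `(0.5, 0.5)` stays on `edge-0`, `(1.5, 1.5)` is shipped to `edge-1`, and `(5, 5)` falls through to the Cloud.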
Pub Date : 2024-05-08DOI: 10.1007/s00607-024-01257-2
Shuyan Cheng, Peng Li, Ruchuan Wang, He Xu
In Differentially Private Federated Learning (DPFL), gradient clipping and random noise addition disproportionately affect statistically heterogeneous data. As a consequence, DPFL has a disparate impact: the accuracy of models trained with DPFL tends to decrease more on such data. If the accuracy of the original model already decreases on heterogeneous data, DPFL may degrade accuracy further. In this work, we study the utility-loss inequality caused by differential privacy and compare the convergence of the private and non-private models. Specifically, we analyze the gradient differences caused by statistically heterogeneous data and explain how statistical heterogeneity relates to the effect of privacy on model convergence. In addition, we propose an improved DPFL algorithm, called R-DPFL, that achieves differential privacy at the same cost but with better utility. R-DPFL adjusts the gradient clipping value and the number of selected users at the beginning of training according to the degree of statistical heterogeneity of the data, weakening the direct proportionality between the differential-privacy noise and the gradient difference and thereby reducing the impact of differential privacy on models trained on heterogeneous data. Our experimental evaluation shows the effectiveness of our algorithm in achieving the same differential-privacy cost with satisfactory utility. Our code is publicly available at https://github.com/chengshuyan/R-DPFL.
{"title":"Differentially private federated learning with non-IID data","authors":"Shuyan Cheng, Peng Li, Ruchuan Wang, He Xu","doi":"10.1007/s00607-024-01257-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01257-2","url":null,"abstract":"<p>In Differentially Private Federated Learning (DPFL), gradient clipping and random noise addition disproportionately affect statistically heterogeneous data. As a consequence, DPFL has a disparate impact: the accuracy of models trained with DPFL tends to decrease more on these data. If the accuracy of the original model decreases on heterogeneous data, DPFL may degrade the accuracy performance more. In this work, we study the utility loss inequality due to differential privacy and compare the convergence of the private and non-private models. Specifically, we analyze the gradient differences caused by statistically heterogeneous data and explain how statistical heterogeneity relates to the effect of privacy on model convergence. In addition, we propose an improved DPFL algorithm, called R-DPFL, to achieve differential privacy at the same cost but with good utility. R-DPFL adjusts the gradient clipping value and the number of selected users at beginning according to the degree of statistical heterogeneity of the data, and weakens the direct proportional relationship between the differential privacy and the gradient difference, thereby reducing the impact of differential privacy on the model trained on heterogeneous data. Our experimental evaluation shows the effectiveness of our elimination algorithm in achieving the same cost of differential privacy with satisfactory utility. 
Our code is publicly available at https://github.com/chengshuyan/R-DPFL.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"20 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140942541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
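The clipping-and-noise step whose interaction with non-IID data the abstract above analyzes can be sketched as a standard DP aggregation round. This is a generic sketch, not R-DPFL itself: R-DPFL additionally adapts the clipping value and the number of sampled users to the data heterogeneity, which is omitted here, and the function name `dp_aggregate` is hypothetical.

```python
import numpy as np

def dp_aggregate(user_grads, clip_c, noise_mult, rng):
    """Clip each user's gradient to L2 norm clip_c, average, and add
    Gaussian noise calibrated to the per-user sensitivity clip_c / n."""
    clipped = []
    for g in user_grads:
        g = np.asarray(g, dtype=float)
        norm = np.linalg.norm(g)
        # Scale down only when the norm exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_c / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_c / len(user_grads)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

On heterogeneous data, users' gradients differ widely in direction and norm, so clipping distorts some users' updates far more than others; this is the disparity the paper targets by tuning `clip_c` to the measured heterogeneity.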
Advances in smart healthcare systems have introduced Internet of Things technologies that improve the quality of medical services. The main goals of these healthcare systems are data security, interaction between entities, efficient data transfer, and sustainability. However, the privacy of patient information remains a fundamental problem in smart healthcare systems. Many authentication and key management protocols for healthcare systems exist in the literature, but their security guarantees still need improvement, and even when security is achieved, fast communication and computation are still required. In this paper, we introduce a new secure privacy-enhanced fast authentication key management scheme that applies effectively to lightweight, resource-constrained devices in healthcare systems. The proposed framework supports fast authentication and efficient key management between entities while minimising computation and communication overheads. We verified the framework with formal and informal verification using BAN logic, Scyther simulation, and the Drozer tool. The simulation and tool verification show that the proposed system resists well-known attacks while reducing communication and computation costs compared to existing healthcare systems.
{"title":"Secure privacy-enhanced fast authentication and key management for IoMT-enabled smart healthcare systems","authors":"Sriramulu Bojjagani, Denslin Brabin, Kalai Kumar, Neeraj Kumar Sharma, Umamaheswararao Batta","doi":"10.1007/s00607-024-01291-0","DOIUrl":"https://doi.org/10.1007/s00607-024-01291-0","url":null,"abstract":"<p>The smart healthcare system advancements have introduced the Internet of Things, enabling technologies to improve the quality of medical services. The main idea of these healthcare systems is to provide data security, interaction between entities, efficient data transfer, and sustainability. However, privacy concerning patient information is a fundamental problem in smart healthcare systems. Many authentications and critical management protocols exist in the literature for healthcare systems, but ensuring security still needs to be improved. Even if security is achieved, it still requires fast communication and computations. In this paper, we have introduced a new secure privacy-enhanced fast authentication key management scheme that effectively applies to lightweight resource-constrained devices in healthcare systems to overcome the issue. The proposed framework is applicable for quick authentication, efficient key management between the entities, and minimising computation and communication overheads. We verified our proposed framework with formal and informal verification using BAN logic, Scyther simulation, and the Drozer tool. 
The simulation and tool verification shows that the proposed system is free from well-known attacks, reducing communication and computation costs compared to the existing healthcare systems.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"352 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140941987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
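The kind of lightweight key management the abstract above targets typically turns a shared secret into a fresh session key and authenticates messages with it. The sketch below shows one HKDF-SHA256 extract-and-expand round (RFC 5869) followed by an HMAC tag; it illustrates the general pattern only and does not reproduce the paper's protocol, whose message flow is not given in the abstract.

```python
import hashlib
import hmac

def derive_session_key(shared_secret: bytes, salt: bytes,
                       info: bytes = b"session") -> bytes:
    """Derive a 128-bit session key via HKDF-SHA256 (single output block)."""
    # Extract: concentrate the secret's entropy into a pseudorandom key.
    prk = hmac.new(salt, shared_secret, hashlib.sha256).digest()
    # Expand: first output block T(1) = HMAC(PRK, info || 0x01).
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
    return okm[:16]  # 128-bit key suits a resource-constrained device

def auth_tag(key: bytes, message: bytes) -> bytes:
    """HMAC tag so device and gateway can authenticate each other's messages."""
    return hmac.new(key, message, hashlib.sha256).digest()
```

Because both sides derive the same key from the same secret and salt, a tag computed on the device verifies at the gateway with a constant-time comparison (`hmac.compare_digest`), avoiding the heavyweight public-key operations that constrained devices struggle with.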