Pub Date: 2024-10-14 | DOI: 10.1016/j.jnca.2024.104040
Muhammad Sajjad Akbar, Zawar Hussain, Muhammad Ikram, Quan Z. Sheng, Subhas Chandra Mukhopadhyay
Fifth-generation (5G) wireless networks are likely to offer high data rates, increased reliability, and low delay for mobile, personal, and local area networks. Along with the rapid growth of smart wireless sensing and communication technologies, data traffic has increased significantly, and existing 5G networks are not able to fully support future massive data traffic for services, storage, and processing. To meet the challenges ahead, both research communities and industry are exploring the sixth-generation (6G) Terahertz-based wireless network that is expected to be offered to industrial users in just ten years. Gaining knowledge and understanding of the different challenges and facets of 6G is crucial in meeting the requirements of future communication and addressing evolving quality of service (QoS) demands. This survey provides a comprehensive examination of specifications, requirements, applications, and enabling technologies related to 6G. It covers the disruptive and innovative integration of 6G with advanced architectures and networks such as software-defined networks (SDN), network functions virtualization (NFV), Cloud/Fog computing, and Artificial Intelligence (AI)-oriented technologies. The survey also addresses privacy and security concerns and presents potential futuristic use cases such as virtual reality, smart healthcare, and Industry 5.0. Furthermore, it identifies the current challenges and outlines future research directions to facilitate the deployment of 6G networks.
{"title":"On challenges of sixth-generation (6G) wireless networks: A comprehensive survey of requirements, applications, and security issues","authors":"Muhammad Sajjad Akbar , Zawar Hussain , Muhammad Ikram , Quan Z. Sheng , Subhas Chandra Mukhopadhyay","doi":"10.1016/j.jnca.2024.104040","DOIUrl":"10.1016/j.jnca.2024.104040","url":null,"abstract":"<div><div>Fifth-generation (5G) wireless networks are likely to offer high data rates, increased reliability, and low delay for mobile, personal, and local area networks. Along with the rapid growth of smart wireless sensing and communication technologies, data traffic has increased significantly and existing 5G networks are not able to fully support future massive data traffic for services, storage, and processing. To meet the challenges that are ahead, both research communities and industry are exploring the sixth generation (6G) Terahertz-based wireless network that is expected to be offered to industrial users in just ten years. Gaining knowledge and understanding of the different challenges and facets of 6G is crucial in meeting the requirements of future communication and addressing evolving quality of service (QoS) demands. This survey provides a comprehensive examination of specifications, requirements, applications, and enabling technologies related to 6G. It covers disruptive and innovative, integration of 6G with advanced architectures and networks such as software-defined networks (SDN), network functions virtualization (NFV), Cloud/Fog computing, and Artificial Intelligence (AI) oriented technologies. The survey also addresses privacy and security concerns and provides potential futuristic use cases such as virtual reality, smart healthcare, and Industry 5.0. Furthermore, it identifies the current challenges and outlines future research directions to facilitate the deployment of 6G networks.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104040"},"PeriodicalIF":7.7,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142445657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-10 | DOI: 10.1016/j.jnca.2024.104042
Mina Emami Khansari, Saeed Sharifian
Serverless computing has emerged as a new cloud computing model which, in contrast to IoT, offers unlimited and scalable access to resources. This paradigm improves resource utilization, cost, scalability, and resource management, specifically in terms of irregular incoming traffic. While cloud computing has been known as a reliable computing and storage solution to host IoT applications, it is not suitable for bandwidth-limited, real-time, and secure applications. Therefore, shifting the resources of the cloud-edge continuum towards the edge can mitigate these limitations. In serverless architecture, applications implemented as Function as a Service (FaaS) include a set of chained event-driven microservices which have to be assigned to available instances. IoT microservices orchestration is still a challenging issue in the serverless computing architecture due to the dynamic, heterogeneous, and large-scale IoT environment with limited resources. The integration of FaaS and distributed Deep Reinforcement Learning (DRL) can transform serverless computing by improving microservice execution effectiveness and optimizing real-time application orchestration. This combination improves scalability and adaptability across the edge-cloud continuum. In this paper, we present a novel Deep Reinforcement Learning (DRL) based microservice orchestration approach for the serverless edge-cloud continuum to minimize resource utilization and delay. Unlike existing methods, this approach is distributed and requires only a minimal subset of realistic data in each interval to find optimal compositions in the proposed edge serverless architecture, and is thus suitable for the IoT environment. Experiments conducted using a number of real-world scenarios demonstrate an 18% improvement in the number of successfully composed applications compared to state-of-the-art methods, including the Load Balance and Shortest Path algorithms.
{"title":"A deep reinforcement learning approach towards distributed Function as a Service (FaaS) based edge application orchestration in cloud-edge continuum","authors":"Mina Emami Khansari, Saeed Sharifian","doi":"10.1016/j.jnca.2024.104042","DOIUrl":"10.1016/j.jnca.2024.104042","url":null,"abstract":"<div><div>Serverless computing has emerged as a new cloud computing model which in contrast to IoT offers unlimited and scalable access to resources. This paradigm improves resource utilization, cost, scalability and resource management specifically in terms of irregular incoming traffic. While cloud computing has been known as a reliable computing and storage solution to host IoT applications, it is not suitable for bandwidth limited, real time and secure applications. Therefore, shifting the resources of the cloud-edge continuum towards the edge can mitigate these limitations. In serverless architecture, applications implemented as Function as a Service (FaaS), include a set of chained event-driven microservices which have to be assigned to available instances. IoT microservices orchestration is still a challenging issue in serverless computing architecture due to IoT dynamic, heterogeneous and large-scale environment with limited resources. The integration of FaaS and distributed Deep Reinforcement Learning (DRL) can transform serverless computing by improving microservice execution effectiveness and optimizing real-time application orchestration. This combination improves scalability and adaptability across the edge-cloud continuum. In this paper, we present a novel Deep Reinforcement Learning (DRL) based microservice orchestration approach for the serverless edge-cloud continuum to minimize resource utilization and delay. This approach, unlike existing methods, is distributed and requires a minimum subset of realistic data in each interval to find optimal compositions in the proposed edge serverless architecture and is thus suitable for IoT environment. Experiments conducted using a number of real-world scenarios demonstrate improvement of the number of successfully composed applications by 18%, respectively, compared to state-of-the art methods including Load Balance, Shortest Path algorithms.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104042"},"PeriodicalIF":7.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142445470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-10 | DOI: 10.1016/j.jnca.2024.104036
Zhengqiu Weng, Weinuo Zhang, Tiantian Zhu, Zhenhao Dou, Haofei Sun, Zhanxiang Ye, Ye Tian
Advanced Persistent Threats (APTs) are prevalent in the field of cyber attacks, where attackers employ advanced techniques to control targets and exfiltrate data without being detected by the system. Existing APT detection methods rely heavily on expert rules or specific training scenarios, resulting in a lack of both generality and reliability. Therefore, this paper proposes a novel real-time APT attack anomaly detection system for large-scale provenance graphs, named RT-APT. Firstly, a provenance graph is constructed from kernel logs, and the WL subtree kernel algorithm is utilized to aggregate contextual information of nodes in the provenance graph, yielding vector representations. Secondly, the FlexSketch algorithm transforms the streaming provenance graph into a sequence of feature vectors. Finally, the K-means clustering algorithm is applied to benign feature vector sequences, where each cluster represents a different system state. Thus, we can identify abnormal behaviors during system execution. RT-APT is therefore able to detect unknown attacks and extract long-term system behaviors. Experiments have been carried out to explore the parameter settings under which RT-APT performs best. In addition, we compare RT-APT with state-of-the-art approaches on three datasets: Laboratory, StreamSpot, and Unicorn. Results demonstrate that our proposed method outperforms the state-of-the-art approaches in terms of runtime performance, memory overhead, and CPU usage.
{"title":"RT-APT: A real-time APT anomaly detection method for large-scale provenance graph","authors":"Zhengqiu Weng , Weinuo Zhang , Tiantian Zhu , Zhenhao Dou , Haofei Sun , Zhanxiang Ye , Ye Tian","doi":"10.1016/j.jnca.2024.104036","DOIUrl":"10.1016/j.jnca.2024.104036","url":null,"abstract":"<div><div>Advanced Persistent Threats (APTs) are prevalent in the field of cyber attacks, where attackers employ advanced techniques to control targets and exfiltrate data without being detected by the system. Existing APT detection methods heavily rely on expert rules or specific training scenarios, resulting in the lack of both generality and reliability. Therefore, this paper proposes a novel real-time APT attack anomaly detection system for large-scale provenance graphs, named RT-APT. Firstly, a provenance graph is constructed with kernel logs, and the WL subtree kernel algorithm is utilized to aggregate contextual information of nodes in the provenance graph. In this way we obtain vector representations. Secondly, the FlexSketch algorithm transforms the streaming provenance graph into a sequence of feature vectors. Finally, the K-means clustering algorithm is performed on benign feature vector sequences, where each cluster represents a different system state. Thus, we can identify abnormal behaviors during system execution. Therefore RT-APT enables to detect unknown attacks and extract long-term system behaviors. Experiments have been carried out to explore the optimal parameter settings under which RT-APT can perform best. In addition, we compare RT-APT and the state-of-the-art approaches on three datasets, Laboratory, StreamSpot and Unicorn. Results demonstrate that our proposed method outperforms the state-of-the-art approaches from the perspective of runtime performance, memory overhead and CPU usage.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104036"},"PeriodicalIF":7.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142527120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-10 | DOI: 10.1016/j.jnca.2024.104039
Mingyang Zhao, Chengtai Liu, Sifeng Zhu
With the surge of transportation data and the diversification of services, the resources for data processing in intelligent transportation systems are becoming more limited. To solve this problem, this paper studies the problem of computation offloading and resource allocation in intelligent transportation systems, adopting edge computing, NOMA communication technology, and edge (content) caching technology. The goal is to minimize the time consumption and energy consumption of the system for processing structured tasks of terminal devices by jointly optimizing the offloading decisions, caching strategies, computation resource allocation, and transmission power allocation. This problem is a nonconvex mixed-integer nonlinear programming problem. To solve this challenging problem, we propose a multi-task multi-objective optimization algorithm (MO-MFEA-S) with adaptive knowledge migration based on MO-MFEA. The results of a large number of simulation experiments demonstrate the convergence and effectiveness of MO-MFEA-S.
{"title":"Joint optimization scheme for task offloading and resource allocation based on MO-MFEA algorithm in intelligent transportation scenarios","authors":"Mingyang Zhao, Chengtai Liu, Sifeng Zhu","doi":"10.1016/j.jnca.2024.104039","DOIUrl":"10.1016/j.jnca.2024.104039","url":null,"abstract":"<div><div>With the surge of transportation data and diversification of services, the resources for data processing in intelligent transportation systems become more limited. In order to solve this problem, this paper studies the problem of computation offloading and resource allocation adopting edge computing, NOMA communication technology and edge(content) caching technology in intelligent transportation systems. The goal is to minimize the time consumption and energy consumption of the system for processing structured tasks of terminal devices by jointly optimizing the offloading decisions, caching strategies, computation resource allocation and transmission power allocation. This problem is a mixed integer nonlinear programming problem that is nonconvex. In order to solve this challenging problem, proposed a multi-task multi-objective optimization algorithm (MO-MFEA-S) with adaptive knowledge migration based on MO-MFEA. The results of a large number of simulation experiments demonstrate the convergence and effectiveness of MO-MFEA-S.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104039"},"PeriodicalIF":7.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-10 | DOI: 10.1016/j.jnca.2024.104038
Jiaqi Chen, Shuhang Han, Donghai Tian, Changzhen Hu
In a network, influence maximization addresses the problem of identifying an optimal set of nodes to initiate influence propagation, thereby maximizing the influence spread. Current approaches for influence maximization encounter limitations in accuracy and efficiency. Furthermore, most existing methods target the IC (Independent Cascade) diffusion model, and few solutions concern dynamic networks. In this study, we focus on dynamic networks consisting of UAV (Unmanned Aerial Vehicle) clusters that perform coverage tasks and introduce IMUNE, an evolutionary algorithm for influence maximization in UAV networks. We first generate dynamic networks that simulate UAV coverage tasks and give a representation of these dynamic networks. Novel fitness functions in the evolutionary algorithm are designed to estimate the influence ability of a set of seed nodes in a dynamic process. On this basis, an integrated fitness function is proposed to fit both the IC and SI (Susceptible–Infected) models. Through improvements in fitness functions and search strategies, IMUNE can find seed nodes that maximize influence spread in dynamic UAV networks under different diffusion models. Experimental results on UAV network datasets show the effectiveness and efficiency of the IMUNE algorithm in solving influence maximization problems.
{"title":"IMUNE: A novel evolutionary algorithm for influence maximization in UAV networks","authors":"Jiaqi Chen , Shuhang Han , Donghai Tian , Changzhen Hu","doi":"10.1016/j.jnca.2024.104038","DOIUrl":"10.1016/j.jnca.2024.104038","url":null,"abstract":"<div><div>In a network, influence maximization addresses identifying an optimal set of nodes to initiate influence propagation, thereby maximizing the influence spread. Current approaches for influence maximization encounter limitations in accuracy and efficiency. Furthermore, most existing methods are aimed at the IC (Independent Cascade) diffusion model, and few solutions concern dynamic networks. In this study, we focus on dynamic networks consisting of UAV (Unmanned Aerial Vehicle) clusters that perform coverage tasks and introduce IMUNE, an evolutionary algorithm for influence maximization in UAV networks. We first generate dynamic networks that simulate UAV coverage tasks and give the representation of dynamic networks. Novel fitness functions in the evolutionary algorithm are designed to estimate the influence ability of a set of seed nodes in a dynamic process. On this basis, an integrated fitness function is proposed to fit both the IC and SI (Susceptible–Infected) models. IMUNE can find seed nodes for maximizing influence spread in dynamic UAV networks with different diffusion models through the improvements in fitness functions and search strategies. Experimental results on UAV network datasets show the effectiveness and efficiency of the IMUNE algorithm in solving influence maximization problems.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104038"},"PeriodicalIF":7.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-09 | DOI: 10.1016/j.jnca.2024.104041
Mohammed Talal, Salem Garfan, Rami Qays, Dragan Pamucar, Dursun Delen, Witold Pedrycz, Amneh Alamleh, Abdullah Alamoodi, B.B. Zaidan, Vladimir Simic
The fifth-generation (5G) network is considered a game-changing technology that promises advanced connectivity for businesses and growth opportunities. To gain a comprehensive understanding of this research domain, it is essential to scrutinize past research on 5G radio access network (RAN) architecture components and their interaction with computing tasks. This systematic literature review focuses on articles from the past decade, specifically on machine learning models integrated with the 5G-RAN architecture. The review disregards service types delivered over 5G-RAN, such as the Internet of Medical Things and the Internet of Things. The review utilizes major databases such as IEEE Xplore, ScienceDirect, and Web of Science to locate highly cited peer-reviewed studies among 785 articles. After implementing a two-phase article filtration process, 143 articles are categorized into review articles (15/143) and learning-based development articles (128/143) based on the type of machine learning used in development. Motivational topics are highlighted, and recommendations are provided to facilitate and expedite the development of 5G-RAN. This review offers a learning-based mapping, delineating the current state of 5G-RAN architectures (e.g., O-RAN, C-RAN, HCRAN, and F-RAN, among others) in terms of computing capabilities and resource availability. Additionally, the article identifies the current concepts of ML prediction (categorical vs. value) that are implemented and discusses areas for future enhancement regarding the goal of network intelligence.
{"title":"A comprehensive systematic review on machine learning application in the 5G-RAN architecture: Issues, challenges, and future directions","authors":"Mohammed Talal , Salem Garfan , Rami Qays , Dragan Pamucar , Dursun Delen , Witold Pedrycz , Amneh Alamleh , Abdullah Alamoodi , B.B. Zaidan , Vladimir Simic","doi":"10.1016/j.jnca.2024.104041","DOIUrl":"10.1016/j.jnca.2024.104041","url":null,"abstract":"<div><div>The fifth-generation (5G) network is considered a game-changing technology that promises advanced connectivity for businesses and growth opportunities. To gain a comprehensive understanding of this research domain, it is essential to scrutinize past research to investigate 5G-radio access network (RAN) architecture components and their interaction with computing tasks. This systematic literature review focuses on articles related to the past decade, specifically on machine learning models integrated with 5G-RAN architecture. The review disregards service types like the Internet of Medical Things, Internet of Things, and others provided by 5G-RAN. The review utilizes major databases such as IEEE Xplore, ScienceDirect, and Web of Science to locate highly cited peer-reviewed studies among 785 articles. After implementing a two-phase article filtration process, 143 articles are categorized into review articles (15/143) and learning-based development articles (128/143) based on the type of machine learning used in development. Motivational topics are highlighted, and recommendations are provided to facilitate and expedite the development of 5G-RAN. This review offers a learning-based mapping, delineating the current state of 5G-RAN architectures (e.g., O-RAN, C-RAN, HCRAN, and F-RAN, among others) in terms of computing capabilities and resource availability. Additionally, the article identifies the current concepts of ML prediction (categorical vs. value) that are implemented and discusses areas for future enhancements regarding the goal of network intelligence.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104041"},"PeriodicalIF":7.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142433588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-30 | DOI: 10.1016/j.jnca.2024.104035
Asmitha K.A., Vinod P., Rafidha Rehiman K.A., Neeraj Raveendran, Mauro Conti
The rapid proliferation of Android apps has given rise to a dark side, where increasingly sophisticated malware poses a formidable challenge for detection. To combat this evolving threat, we present an explainable hybrid multi-modal framework. This framework leverages the power of deep learning, with a novel model fusion technique, to illuminate the hidden characteristics of malicious apps. Our approach combines models (leveraging a late fusion approach) trained on attributes derived from static and dynamic analysis, hence utilizing the unique strengths of each model. We thoroughly analyze individual feature categories, feature ensembles, and model fusion using traditional machine learning classifiers and deep neural networks across diverse datasets. Our hybrid fused model outperforms others, achieving an F1-score of 99.97% on CICMaldroid2020. We use SHAP (SHapley Additive exPlanations) and t-SNE (t-distributed Stochastic Neighbor Embedding) to further analyze and interpret the best-performing model. We highlight the efficacy of our architectural design through an ablation study, revealing that our approach consistently achieves over 99% detection accuracy across multiple deep learning models. This lays the groundwork for substantial advancements in security and risk mitigation within interconnected Android OS environments.
{"title":"Android malware defense through a hybrid multi-modal approach","authors":"Asmitha K.A. , Vinod P. , Rafidha Rehiman K.A. , Neeraj Raveendran , Mauro Conti","doi":"10.1016/j.jnca.2024.104035","DOIUrl":"10.1016/j.jnca.2024.104035","url":null,"abstract":"<div><div>The rapid proliferation of Android apps has given rise to a dark side, where increasingly sophisticated malware poses a formidable challenge for detection. To combat this evolving threat, we present an explainable hybrid multi-modal framework. This framework leverages the power of deep learning, with a novel model fusion technique, to illuminate the hidden characteristics of malicious apps. Our approach combines models (leveraging late fusion approach) trained on attributes derived from static and dynamic analysis, hence utilizing the unique strengths of each model. We thoroughly analyze individual feature categories, feature ensembles, and model fusion using traditional machine learning classifiers and deep neural networks across diverse datasets. Our hybrid fused model outperforms others, achieving an F1-score of 99.97% on CICMaldroid2020. We use SHAP (SHapley Additive exPlanations) and t-SNE (t-distributed Stochastic Neighbor Embedding) to further analyze and interpret the best-performing model. We highlight the efficacy of our architectural design through an ablation study, revealing that our approach consistently achieves over 99% detection accuracy across multiple deep learning models. This paves the way groundwork for substantial advancements in security and risk mitigation within interconnected Android OS environments.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104035"},"PeriodicalIF":7.7,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142433589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-26 | DOI: 10.1016/j.jnca.2024.104034
Moez Krichen, Mohamed S. Abdalzaher
The advent of machine learning (ML) and artificial intelligence (AI) has brought about a significant transformation across multiple industries, as it has facilitated the automation of jobs, the extraction of valuable insights from extensive datasets, and sophisticated decision-making processes. Nevertheless, optimizing efficiency has become a critical research field due to AI systems' increasing complexity and resource requirements. This paper provides an extensive examination of several techniques and methodologies aimed at improving the efficiency of ML and AI. We investigate many areas of research related to AI, including algorithmic improvements, hardware acceleration techniques, data preprocessing methods, model compression approaches, distributed computing frameworks, energy-efficient strategies, fundamental concepts related to AI, AI efficiency evaluation, and formal methodologies. Furthermore, we examine the obstacles and prospective avenues in this domain. This paper offers a deep analysis of many subjects to equip researchers and practitioners with strategies to enhance efficiency within ML and AI systems. More particularly, the paper provides an extensive analysis of efficiency-enhancing techniques across multiple dimensions: algorithmic advancements, hardware acceleration, data processing, model compression, distributed computing, and energy consumption.
{"title":"Performance enhancement of artificial intelligence: A survey","authors":"Moez Krichen , Mohamed S. Abdalzaher","doi":"10.1016/j.jnca.2024.104034","DOIUrl":"10.1016/j.jnca.2024.104034","url":null,"abstract":"<div><div>The advent of machine learning (ML) and Artificial intelligence (AI) has brought about a significant transformation across multiple industries, as it has facilitated the automation of jobs, extraction of valuable insights from extensive datasets, and facilitation of sophisticated decision-making processes. Nevertheless, optimizing efficiency has become a critical research field due to AI systems’ increasing complexity and resource requirements. This paper provides an extensive examination of several techniques and methodologies aimed at improving the efficiency of ML and artificial intelligence. In this study, we investigate many areas of research about AI. These areas include algorithmic improvements, hardware acceleration techniques, data pretreatment methods, model compression approaches, distributed computing frameworks, energy-efficient strategies, fundamental concepts related to AI, AI efficiency evaluation, and formal methodologies. Furthermore, we engage in an examination of the obstacles and prospective avenues in this particular domain. This paper offers a deep analysis of many subjects to equip researchers and practitioners with sufficient strategies to enhance efficiency within ML and AI systems. More particularly, the paper provides an extensive analysis of efficiency-enhancing techniques across multiple dimensions: algorithmic advancements, hardware acceleration, data processing, model compression, distributed computing, and energy consumption.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104034"},"PeriodicalIF":7.7,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142326905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-24 | DOI: 10.1016/j.jnca.2024.104030
Amirmohammad Karamzadeh, Alireza Shameli-Sendi
In recent years, serverless computing has gained considerable attention in academic, professional, and business circles. Unique features such as code development flexibility and the cost-efficient pay-as-you-go pricing model have led to predictions of widespread adoption of serverless services. Major players in the cloud computing sector, including industry giants like Amazon, Google, and Microsoft, have made significant advancements in the field of serverless services. However, cloud computing faces complex challenges, with two prominent ones being the latency caused by cold start instances and security vulnerabilities associated with container escapes. These challenges undermine the smooth execution of isolated functions, a concern amplified by technologies like Google gVisor and Kata Containers. While the integration of tools like lightweight virtual machines has alleviated concerns about container escape vulnerabilities, the primary issue remains the increased delay experienced during cold start instances in the execution of serverless functions. The purpose of this research is to propose an architecture that reduces cold start delay overhead by utilizing lightweight virtual machines within a commercial architecture, thereby achieving a setup that closely resembles real-world scenarios. This research employs supervised learning methodologies to predict function invocations by leveraging the execution patterns of other program functions. The goal is to proactively mitigate cold start scenarios by invoking the target function before actual user initiation, effectively transitioning from cold starts to warm starts. In this study, we compared our approach with two strategies: a fixed-window and a variable-window strategy. Commercial platforms like Knative, OpenFaaS, and OpenWhisk typically employ a fixed 15-minute window to handle cold starts. In contrast to these platforms, our approach demonstrated a significant reduction in cold start incidents. Specifically, when calling a function 200 times with 5, 10, and 20 invocations within one hour, our approach achieved reductions in cold starts of 83.33%, 92.13%, and 90.90%, respectively. Compared to the variable-window approach, which adjusts the window based on cold start values, our proposed approach was able to prevent 82.92%, 91.66%, and 90.56% of cold starts in the same scenario. These results highlight the effectiveness of our approach in significantly reducing cold starts, thereby enhancing the performance and responsiveness of serverless functions. Our method outperformed both the fixed- and variable-window strategies, making it a valuable contribution to the field of serverless computing. Additionally, the implementation of pre-invocation strategies to convert cold starts into warm starts results in a substantial reduction in the execution time of functions within lightweight virtual machines.
{"title":"Reducing cold start delay in serverless computing using lightweight virtual machines","authors":"Amirmohammad Karamzadeh, Alireza Shameli-Sendi","doi":"10.1016/j.jnca.2024.104030","DOIUrl":"10.1016/j.jnca.2024.104030","url":null,"abstract":"<div><div>In recent years, serverless computing has gained considerable attention in academic, professional, and business circles. Unique features such as code development flexibility and the cost-efficient pay-as-you-go pricing model have led to predictions of widespread adoption of serverless services. Major players in the cloud computing sector, including industry giants like Amazon, Google, and Microsoft, have made significant advancements in the field of serverless services. However, cloud computing faces complex challenges, with two prominent ones being the latency caused by cold start instances and security vulnerabilities associated with container escapes. These challenges undermine the smooth execution of isolated functions, a concern amplified by technologies like Google gVisor and Kata Containers. While the integration of tools like lightweight virtual machines has alleviated concerns about container escape vulnerabilities, the primary issue remains the increased delay experienced during cold start instances in the execution of serverless functions. The purpose of this research is to propose an architecture that reduces cold start delay overhead by utilizing lightweight virtual machines within a commercial architecture, thereby achieving a setup that closely resembles real-world scenarios. This research employs supervised learning methodologies to predict function invocations by leveraging the execution patterns of other program functions. The goal is to proactively mitigate cold start scenarios by invoking the target function before actual user initiation, effectively transitioning from cold starts to warm starts. In this study, we compared our approach with two fixed and variable window strategies. Commercial platforms like Knative, OpenFaaS, and OpenWhisk typically employ a fixed 15-minute window during cold starts. In contrast to these platforms, our approach demonstrated a significant reduction in cold start incidents. Specifically, when calling a function 200 times with 5, 10, and 20 invocations within one hour, our approach achieved reductions in cold starts by 83.33%, 92.13%, and 90.90%, respectively. Compared to the variable window approach, which adjusts the window based on cold start values, our proposed approach was able to prevent 82.92%, 91.66%, and 90.56% of cold starts for the same scenario. These results highlight the effectiveness of our approach in significantly reducing cold starts, thereby enhancing the performance and responsiveness of serverless functions. Our method outperformed both fixed and variable window strategies, making it a valuable contribution to the field of serverless computing. 
Additionally, the implementation of pre-invocation strategies to convert cold starts into warm starts results in a substantial reduction in the execution time of functions within lightweight virtual machines.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104030"},"PeriodicalIF":7.7,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142419861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
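A toy sketch of the prediction idea above: a supervised model is trained on the previous interval's invocations of other functions to predict whether the target function will be called next, and a predicted invocation triggers pre-warming. The trace, features, and dependency pattern are fabricated for illustration and do not reflect the paper's dataset or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
T = 500                                    # one-minute intervals (toy trace)
other = rng.binomial(1, 0.3, size=(T, 3))  # invocations of 3 other functions
# Toy dependency: the target tends to fire one interval after function 0.
target = np.roll(other[:, 0], 1)
target[0] = 0

# Features for interval t: what the other functions did at t-1.
X = other[:-1]
y = target[1:]
split = int(0.8 * len(X))
model = LogisticRegression().fit(X[:split], y[:split])

def should_prewarm(last_interval_invocations):
    """Pre-invoke the target (warm its microVM) if an invocation is predicted."""
    return model.predict([last_interval_invocations])[0] == 1

print("prewarm after [1, 0, 0]?", should_prewarm([1, 0, 0]))
pred = model.predict(X[split:])
avoided = np.mean(pred[y[split:] == 1] == 1)   # fraction of cold starts avoided
print(f"cold starts avoided in toy trace: {avoided:.0%}")
```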
Pub Date: 2024-09-21 | DOI: 10.1016/j.jnca.2024.104033
Bo Yin, Zeshu Ai, Jun Lu, Ying Feng
Crowdsourcing provides a new problem-solving paradigm that utilizes the intelligence of crowds to solve computer-hard problems. Task assignment is a foundational problem in crowdsourcing systems and applications. However, existing task assignment approaches often assume that workers operate independently. In reality, worker cooperation is necessary. In this paper, we address the cooperative task assignment (CTA) problem, where a worker needs to pay a monetary cost to another worker in exchange for cooperation. Cooperative working also requires one task to be assigned to more than one worker to ensure the reliability of crowdsourcing services. We formalize the CTA problem with the goal of minimizing the total cooperation cost of all workers under the workload limitation of each worker. The challenge is that the individual cooperation cost that a worker pays for a specific task depends highly on the task distribution. This increases the difficulty of obtaining an assignment instance with a small cooperation cost. We prove that the CTA problem is NP-hard. We propose a two-stage cooperative task assignment framework that first assigns each task to one worker and then makes duplicate assignments. We also present solutions to address dynamic scenarios. Extensive experimental results show that the proposed framework can effectively reduce the cooperation cost.
{"title":"A cooperative task assignment framework with minimum cooperation cost in crowdsourcing systems","authors":"Bo Yin, Zeshu Ai, Jun Lu, Ying Feng","doi":"10.1016/j.jnca.2024.104033","DOIUrl":"10.1016/j.jnca.2024.104033","url":null,"abstract":"<div><div>Crowdsourcing provides a new problem-solving paradigm that utilizes the intelligence of crowds to solve computer-hard problems. Task assignment is a foundation problem in crowdsourcing systems and applications. However, existing task assignment approaches often assume that workers operate independently. In reality, worker cooperation is necessary. In this paper, we address the cooperative task assignment (CTA) problem where a worker needs to pay a monetary cost to another worker in exchange for cooperation. Cooperative working also requires one task to be assigned to more than one worker to ensure the reliability of crowdsourcing services. We formalize the CTA problem with the goal of minimizing the total cooperation cost of all workers under the workload limitation of each worker. The challenge is that the individual cooperation cost that a worker pays for a specific task highly depends on the task distribution. This increases the difficulty of obtaining the assignment instance with a small cooperation cost. We prove that the CTA problem is NP-hard. We propose a two-stage cooperative task assignment framework that first assigns each task to one worker and then makes duplicate assignments. We also present solutions to address the dynamic scenarios. Extensive experimental results show that the proposed framework can effectively reduce the cooperation cost.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104033"},"PeriodicalIF":7.7,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142320134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}