
Journal of Network and Computer Applications: Latest Articles

On challenges of sixth-generation (6G) wireless networks: A comprehensive survey of requirements, applications, and security issues
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-14 | DOI: 10.1016/j.jnca.2024.104040
Muhammad Sajjad Akbar, Zawar Hussain, Muhammad Ikram, Quan Z. Sheng, Subhas Chandra Mukhopadhyay
Fifth-generation (5G) wireless networks are likely to offer high data rates, increased reliability, and low delay for mobile, personal, and local area networks. Along with the rapid growth of smart wireless sensing and communication technologies, data traffic has increased significantly, and existing 5G networks are not able to fully support future massive data traffic for services, storage, and processing. To meet the challenges ahead, both research communities and industry are exploring the sixth-generation (6G) Terahertz-based wireless network, which is expected to be offered to industrial users within ten years. Gaining knowledge and understanding of the different challenges and facets of 6G is crucial for meeting the requirements of future communication and addressing evolving quality of service (QoS) demands. This survey provides a comprehensive examination of specifications, requirements, applications, and enabling technologies related to 6G. It covers the disruptive and innovative integration of 6G with advanced architectures and networks such as software-defined networks (SDN), network functions virtualization (NFV), Cloud/Fog computing, and Artificial Intelligence (AI) oriented technologies. The survey also addresses privacy and security concerns and presents potential futuristic use cases such as virtual reality, smart healthcare, and Industry 5.0. Furthermore, it identifies the current challenges and outlines future research directions to facilitate the deployment of 6G networks.
Citations: 0
A deep reinforcement learning approach towards distributed Function as a Service (FaaS) based edge application orchestration in cloud-edge continuum
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-10 | DOI: 10.1016/j.jnca.2024.104042
Mina Emami Khansari, Saeed Sharifian
Serverless computing has emerged as a new cloud computing model which, in contrast to IoT, offers unlimited and scalable access to resources. This paradigm improves resource utilization, cost, scalability, and resource management, specifically in terms of irregular incoming traffic. While cloud computing has been known as a reliable computing and storage solution to host IoT applications, it is not suitable for bandwidth-limited, real-time, and secure applications. Therefore, shifting the resources of the cloud-edge continuum towards the edge can mitigate these limitations. In serverless architecture, applications implemented as Function as a Service (FaaS) include a set of chained event-driven microservices which have to be assigned to available instances. IoT microservice orchestration is still a challenging issue in serverless computing architecture due to the dynamic, heterogeneous, and large-scale IoT environment with limited resources. The integration of FaaS and distributed Deep Reinforcement Learning (DRL) can transform serverless computing by improving microservice execution effectiveness and optimizing real-time application orchestration. This combination improves scalability and adaptability across the edge-cloud continuum. In this paper, we present a novel Deep Reinforcement Learning (DRL) based microservice orchestration approach for the serverless edge-cloud continuum to minimize resource utilization and delay. This approach, unlike existing methods, is distributed and requires only a minimal subset of realistic data in each interval to find optimal compositions in the proposed edge serverless architecture, and is thus suitable for the IoT environment. Experiments conducted using a number of real-world scenarios demonstrate an 18% improvement in the number of successfully composed applications compared to state-of-the-art methods, including the Load Balance and Shortest Path algorithms.
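To make the placement idea concrete, the sketch below is a toy tabular Q-learning agent that places each microservice of a chain on an edge or cloud target and is rewarded for low delay. It is only a hedged approximation of the general DRL placement idea, not the authors' distributed model: the delay figures, capacities, congestion penalty, and reward shape are invented for the example.

    import random
    from collections import defaultdict

    # Toy Q-learning placement sketch (illustrative only; not the paper's DRL model).
    # State  = position of the microservice in its chain.
    # Action = placement target: 0 = local edge node, 1 = neighbouring edge node, 2 = cloud.
    DELAY = {0: 5.0, 1: 8.0, 2: 20.0}        # assumed base delay per target, in ms
    CAPACITY = {0: 2, 1: 2, 2: 10}           # assumed concurrent-function capacity per target

    def run_chain(q, chain_len=4, eps=0.1, alpha=0.5, gamma=0.9):
        load = {k: 0 for k in CAPACITY}
        total = 0.0
        for step in range(chain_len):
            options = [a for a in DELAY if load[a] < CAPACITY[a]]
            if random.random() < eps:
                action = random.choice(options)
            else:
                action = max(options, key=lambda a: q[(step, a)])
            load[action] += 1
            delay = DELAY[action] * (1 + 0.2 * load[action])   # simple congestion penalty
            total += delay
            future = max(q[(step + 1, a)] for a in DELAY) if step + 1 < chain_len else 0.0
            q[(step, action)] += alpha * (-delay + gamma * future - q[(step, action)])
        return total

    if __name__ == "__main__":
        q = defaultdict(float)
        for _ in range(3000):
            run_chain(q)
        policy = [max(DELAY, key=lambda a: q[(s, a)]) for s in range(4)]
        print("greedy placement per chain position:", policy)

In the paper's setting the state and reward would instead reflect measured resource utilization and end-to-end delay across the edge-cloud continuum.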
Citations: 0
RT-APT: A real-time APT anomaly detection method for large-scale provenance graph
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-10 | DOI: 10.1016/j.jnca.2024.104036
Zhengqiu Weng, Weinuo Zhang, Tiantian Zhu, Zhenhao Dou, Haofei Sun, Zhanxiang Ye, Ye Tian
Advanced Persistent Threats (APTs) are prevalent in the field of cyber attacks, where attackers employ advanced techniques to control targets and exfiltrate data without being detected by the system. Existing APT detection methods rely heavily on expert rules or specific training scenarios, resulting in a lack of both generality and reliability. Therefore, this paper proposes a novel real-time APT attack anomaly detection system for large-scale provenance graphs, named RT-APT. Firstly, a provenance graph is constructed from kernel logs, and the WL subtree kernel algorithm is utilized to aggregate contextual information of nodes in the provenance graph, yielding vector representations. Secondly, the FlexSketch algorithm transforms the streaming provenance graph into a sequence of feature vectors. Finally, the K-means clustering algorithm is applied to benign feature vector sequences, where each cluster represents a different system state, so that abnormal behaviors can be identified during system execution. RT-APT can therefore detect unknown attacks and extract long-term system behaviors. Experiments have been carried out to explore the parameter settings under which RT-APT performs best. In addition, we compare RT-APT with state-of-the-art approaches on three datasets: Laboratory, StreamSpot, and Unicorn. Results demonstrate that our proposed method outperforms the state-of-the-art approaches in terms of runtime performance, memory overhead, and CPU usage.
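To ground the pipeline sketched above (WL-style aggregation of node context, feature vectors, K-means over benign behaviour), here is a minimal, hypothetical illustration. It is not RT-APT: it performs a single Weisfeiler-Lehman relabelling pass rather than the paper's WL subtree kernel, skips FlexSketch entirely, and uses invented toy provenance snapshots and an invented anomaly threshold.

    from collections import Counter
    import numpy as np
    from sklearn.cluster import KMeans

    def wl_relabel(edges, labels):
        # One Weisfeiler-Lehman pass: a node's new label is its own label plus the
        # sorted multiset of its neighbours' labels.
        neigh = {n: [] for n in labels}
        for u, v in edges:
            neigh[u].append(labels[v])
            neigh[v].append(labels[u])
        return Counter(labels[n] + "|" + ",".join(sorted(neigh[n])) for n in labels)

    def to_vector(counter, vocab):
        return np.array([counter.get(w, 0) for w in vocab], dtype=float)

    # Toy benign snapshots: a process touching 2 or 3 files (purely illustrative).
    benign = []
    for k in (2, 3):
        for _ in range(3):
            edges = [("proc", f"file{i}") for i in range(k)]
            labels = {"proc": "process", **{f"file{i}": "file" for i in range(k)}}
            benign.append((edges, labels))
    # Toy "attack" snapshot: the same process fanning out to many sockets instead.
    attack = ([("proc", f"sock{i}") for i in range(3)],
              {"proc": "process", **{f"sock{i}": "socket" for i in range(3)}})

    benign_counts = [wl_relabel(e, l) for e, l in benign]
    vocab = sorted({w for c in benign_counts for w in c})
    X = np.vstack([to_vector(c, vocab) for c in benign_counts])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # clusters = "normal" states
    radius = km.transform(X).min(axis=1).max()                   # farthest benign point from its centre

    dist = km.transform(to_vector(wl_relabel(*attack), vocab).reshape(1, -1)).min()
    print("anomalous" if dist > radius else "benign", f"(distance={dist:.2f}, radius={radius:.2f})")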
Citations: 0
Joint optimization scheme for task offloading and resource allocation based on MO-MFEA algorithm in intelligent transportation scenarios
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-10 | DOI: 10.1016/j.jnca.2024.104039
Mingyang Zhao, Chengtai Liu, Sifeng Zhu
With the surge of transportation data and the diversification of services, the resources available for data processing in intelligent transportation systems are becoming more limited. To address this problem, this paper studies computation offloading and resource allocation using edge computing, NOMA communication technology, and edge (content) caching technology in intelligent transportation systems. The goal is to minimize the time and energy the system consumes when processing the structured tasks of terminal devices by jointly optimizing the offloading decisions, caching strategies, computation resource allocation, and transmission power allocation. This problem is a nonconvex mixed-integer nonlinear programming problem. To solve this challenging problem, we propose a multi-task multi-objective optimization algorithm (MO-MFEA-S) with adaptive knowledge migration based on MO-MFEA. The results of a large number of simulation experiments demonstrate the convergence and effectiveness of MO-MFEA-S.
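As a concrete toy of the bi-objective trade-off described above, the sketch below evaluates an offloading decision vector against the two objectives named in the abstract (total delay and total energy) and tests Pareto dominance, the comparison a multi-objective evolutionary algorithm such as MO-MFEA-S relies on internally. The delay/energy model, all constants, and the binary local-versus-edge decision space are assumptions made for illustration; caching, NOMA power allocation, and the evolutionary operators themselves are beyond this snippet.

    import random

    # Illustrative bi-objective evaluation for task offloading; constants are assumptions.
    CPU_LOCAL = 1e9                 # cycles/s at the terminal device
    CPU_EDGE = 8e9                  # cycles/s at the edge server
    POWER_TX = 0.5                  # W, uplink transmission power
    RATE_UP = 20e6                  # bit/s, uplink rate (NOMA details abstracted away)
    ENERGY_PER_CYCLE_LOCAL = 1e-9   # J/cycle for local execution

    def evaluate(decisions, tasks):
        """decisions[i] = 0 -> run task i locally, 1 -> offload to the edge.
        Each task is (input_bits, cpu_cycles). Returns (total_delay, total_energy)."""
        delay = energy = 0.0
        for d, (bits, cycles) in zip(decisions, tasks):
            if d == 0:
                delay += cycles / CPU_LOCAL
                energy += cycles * ENERGY_PER_CYCLE_LOCAL
            else:
                t_up = bits / RATE_UP
                delay += t_up + cycles / CPU_EDGE
                energy += POWER_TX * t_up
        return delay, energy

    def dominates(a, b):
        """Pareto dominance: a is no worse in both objectives and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    if __name__ == "__main__":
        random.seed(0)
        tasks = [(random.randint(1, 8) * 1e5, random.randint(1, 5) * 1e8) for _ in range(6)]
        all_local = evaluate([0] * 6, tasks)
        all_edge = evaluate([1] * 6, tasks)
        print("all local (delay s, energy J):", all_local)
        print("all edge  (delay s, energy J):", all_edge)
        print("edge dominates local:", dominates(all_edge, all_local))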
Citations: 0
IMUNE: A novel evolutionary algorithm for influence maximization in UAV networks
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-10 | DOI: 10.1016/j.jnca.2024.104038
Jiaqi Chen, Shuhang Han, Donghai Tian, Changzhen Hu
In a network, influence maximization addresses identifying an optimal set of nodes to initiate influence propagation, thereby maximizing the influence spread. Current approaches for influence maximization encounter limitations in accuracy and efficiency. Furthermore, most existing methods are aimed at the IC (Independent Cascade) diffusion model, and few solutions concern dynamic networks. In this study, we focus on dynamic networks consisting of UAV (Unmanned Aerial Vehicle) clusters that perform coverage tasks and introduce IMUNE, an evolutionary algorithm for influence maximization in UAV networks. We first generate dynamic networks that simulate UAV coverage tasks and give the representation of dynamic networks. Novel fitness functions in the evolutionary algorithm are designed to estimate the influence ability of a set of seed nodes in a dynamic process. On this basis, an integrated fitness function is proposed to fit both the IC and SI (Susceptible–Infected) models. IMUNE can find seed nodes for maximizing influence spread in dynamic UAV networks with different diffusion models through the improvements in fitness functions and search strategies. Experimental results on UAV network datasets show the effectiveness and efficiency of the IMUNE algorithm in solving influence maximization problems.
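The core quantity that fitness functions of this kind estimate is the expected spread of a seed set. Under the IC model this is commonly approximated by Monte-Carlo simulation of independent cascades; the sketch below shows that estimator on a tiny, invented static topology. It is not the IMUNE fitness function, which additionally accounts for network dynamics and the SI model.

    import random

    # Monte-Carlo estimate of influence spread under the Independent Cascade (IC) model.
    # The topology and activation probability are illustrative assumptions.
    def ic_spread(graph, seeds, p=0.2, runs=2000):
        total = 0
        for _ in range(runs):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                nxt = []
                for u in frontier:
                    for v in graph.get(u, []):
                        if v not in active and random.random() < p:
                            active.add(v)
                            nxt.append(v)
                frontier = nxt
            total += len(active)
        return total / runs

    if __name__ == "__main__":
        # Toy UAV communication graph: node -> neighbours it can influence.
        graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
        for seeds in ([0], [0, 4], [1, 2]):
            print(seeds, "expected spread:", round(ic_spread(graph, seeds), 2))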
Citations: 0
A comprehensive systematic review on machine learning application in the 5G-RAN architecture: Issues, challenges, and future directions
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-09 | DOI: 10.1016/j.jnca.2024.104041
Mohammed Talal, Salem Garfan, Rami Qays, Dragan Pamucar, Dursun Delen, Witold Pedrycz, Amneh Alamleh, Abdullah Alamoodi, B.B. Zaidan, Vladimir Simic
The fifth-generation (5G) network is considered a game-changing technology that promises advanced connectivity for businesses and growth opportunities. To gain a comprehensive understanding of this research domain, it is essential to scrutinize past research investigating 5G radio access network (RAN) architecture components and their interaction with computing tasks. This systematic literature review focuses on articles from the past decade, specifically on machine learning models integrated with the 5G-RAN architecture. The review disregards service types delivered over 5G-RAN, such as the Internet of Medical Things and the Internet of Things. It utilizes major databases such as IEEE Xplore, ScienceDirect, and Web of Science to locate highly cited peer-reviewed studies among 785 articles. After a two-phase article filtration process, 143 articles are categorized into review articles (15/143) and learning-based development articles (128/143) based on the type of machine learning used in development. Motivational topics are highlighted, and recommendations are provided to facilitate and expedite the development of 5G-RAN. This review offers a learning-based mapping, delineating the current state of 5G-RAN architectures (e.g., O-RAN, C-RAN, HCRAN, and F-RAN, among others) in terms of computing capabilities and resource availability. Additionally, the article identifies the current concepts of ML prediction (categorical vs. value) that are implemented and discusses areas for future enhancements regarding the goal of network intelligence.
Citations: 0
Android malware defense through a hybrid multi-modal approach
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-30 | DOI: 10.1016/j.jnca.2024.104035
Asmitha K.A., Vinod P., Rafidha Rehiman K.A., Neeraj Raveendran, Mauro Conti
The rapid proliferation of Android apps has given rise to a dark side, where increasingly sophisticated malware poses a formidable challenge for detection. To combat this evolving threat, we present an explainable hybrid multi-modal framework. This framework leverages the power of deep learning, with a novel model fusion technique, to illuminate the hidden characteristics of malicious apps. Our approach combines models (leveraging a late fusion approach) trained on attributes derived from static and dynamic analysis, thereby utilizing the unique strengths of each model. We thoroughly analyze individual feature categories, feature ensembles, and model fusion using traditional machine learning classifiers and deep neural networks across diverse datasets. Our hybrid fused model outperforms the others, achieving an F1-score of 99.97% on CICMaldroid2020. We use SHAP (SHapley Additive exPlanations) and t-SNE (t-distributed Stochastic Neighbor Embedding) to further analyze and interpret the best-performing model. We highlight the efficacy of our architectural design through an ablation study, revealing that our approach consistently achieves over 99% detection accuracy across multiple deep learning models. This lays the groundwork for substantial advancements in security and risk mitigation within interconnected Android OS environments.
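The late-fusion step mentioned above combines per-modality predictions after each model has been trained separately. Below is a hedged, self-contained sketch of that idea on synthetic data: one classifier for static-analysis features, one for dynamic-analysis features, and an unweighted average of their malware probabilities. The feature generators, classifiers, and fusion weights are illustrative assumptions; the paper's framework uses deep models and an explainability layer (SHAP, t-SNE) not reproduced here.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    # Hypothetical late-fusion sketch: one model per modality, probabilities averaged.
    rng = np.random.default_rng(0)
    n = 400
    y = rng.integers(0, 2, n)                                  # 0 = benign, 1 = malware (synthetic)
    X_static = rng.normal(size=(n, 20)) + y[:, None] * 0.8     # stand-in for permission/API counts
    X_dynamic = rng.normal(size=(n, 12)) + y[:, None] * 0.5    # stand-in for runtime syscall stats

    split = 300
    m_static = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_static[:split], y[:split])
    m_dynamic = LogisticRegression(max_iter=1000).fit(X_dynamic[:split], y[:split])

    # Late fusion: combine per-modality class probabilities (here a simple unweighted mean).
    p_fused = (m_static.predict_proba(X_static[split:])[:, 1] +
               m_dynamic.predict_proba(X_dynamic[split:])[:, 1]) / 2
    pred = (p_fused > 0.5).astype(int)
    print("fused accuracy on held-out toy data:", (pred == y[split:]).mean())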
Citations: 0
Performance enhancement of artificial intelligence: A survey
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-26 | DOI: 10.1016/j.jnca.2024.104034
Moez Krichen, Mohamed S. Abdalzaher
The advent of machine learning (ML) and artificial intelligence (AI) has brought about a significant transformation across multiple industries, as it has enabled the automation of jobs, the extraction of valuable insights from extensive datasets, and more sophisticated decision-making processes. Nevertheless, optimizing efficiency has become a critical research field due to the increasing complexity and resource requirements of AI systems. This paper provides an extensive examination of several techniques and methodologies aimed at improving the efficiency of ML and AI. We investigate many areas of AI research, including algorithmic improvements, hardware acceleration techniques, data preprocessing methods, model compression approaches, distributed computing frameworks, energy-efficient strategies, fundamental concepts related to AI, AI efficiency evaluation, and formal methodologies. Furthermore, we examine the obstacles and prospective avenues in this domain. This paper offers a deep analysis of many subjects to equip researchers and practitioners with sufficient strategies to enhance efficiency within ML and AI systems. More particularly, the paper provides an extensive analysis of efficiency-enhancing techniques across multiple dimensions: algorithmic advancements, hardware acceleration, data processing, model compression, distributed computing, and energy consumption.
Citations: 0
Reducing cold start delay in serverless computing using lightweight virtual machines
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-24 | DOI: 10.1016/j.jnca.2024.104030
Amirmohammad Karamzadeh, Alireza Shameli-Sendi
In recent years, serverless computing has gained considerable attention in academic, professional, and business circles. Unique features such as code development flexibility and the cost-efficient pay-as-you-go pricing model have led to predictions of widespread adoption of serverless services. Major players in the cloud computing sector, including industry giants like Amazon, Google, and Microsoft, have made significant advancements in the field of serverless services. However, cloud computing faces complex challenges, with two prominent ones being the latency caused by cold start instances and the security vulnerabilities associated with container escapes. These challenges undermine the smooth execution of isolated functions, a concern amplified by technologies like Google gVisor and Kata Containers. While the integration of tools like lightweight virtual machines has alleviated concerns about container escape vulnerabilities, the primary issue remains the increased delay experienced during cold start instances in the execution of serverless functions. The purpose of this research is to propose an architecture that reduces cold start delay overhead by utilizing lightweight virtual machines within a commercial architecture, thereby achieving a setup that closely resembles real-world scenarios. This research employs supervised learning methodologies to predict function invocations by leveraging the execution patterns of other program functions. The goal is to proactively mitigate cold start scenarios by invoking the target function before actual user initiation, effectively transitioning from cold starts to warm starts. In this study, we compared our approach with two strategies: a fixed window and a variable window. Commercial platforms like Knative, OpenFaaS, and OpenWhisk typically employ a fixed 15-minute window during cold starts. In contrast to these platforms, our approach demonstrated a significant reduction in cold start incidents. Specifically, when calling a function 200 times with 5, 10, and 20 invocations within one hour, our approach reduced cold starts by 83.33%, 92.13%, and 90.90%, respectively. Compared to the variable window approach, which adjusts the window based on cold start values, our proposed approach was able to prevent 82.92%, 91.66%, and 90.56% of cold starts for the same scenarios. These results highlight the effectiveness of our approach in significantly reducing cold starts, thereby enhancing the performance and responsiveness of serverless functions. Our method outperformed both the fixed and variable window strategies, making it a valuable contribution to the field of serverless computing. Additionally, the implementation of pre-invocation strategies to convert cold starts into warm starts results in a substantial reduction in the execution time of functions within lightweight virtual machines.
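The pre-invocation idea above can be illustrated with a small supervised predictor over invocation histories. In the sketch below, a logistic-regression model learns that a target function tends to fire one interval after a correlated upstream function and triggers a pre-warm when it predicts an imminent call. The synthetic traffic pattern, window size, and pre-warm rule are assumptions for the example, not the paper's workload or model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy workload: function B is usually called one interval after function A (assumed pattern).
    rng = np.random.default_rng(1)
    T = 600
    calls_a = (rng.random(T) < 0.15).astype(int)
    calls_b = np.zeros(T, dtype=int)
    calls_b[1:] = ((calls_a[:-1] == 1) & (rng.random(T - 1) < 0.9)).astype(int)

    WINDOW = 3
    X = np.array([calls_a[t - WINDOW:t] for t in range(WINDOW, T - 1)])  # recent A activity
    y = calls_b[WINDOW + 1:T]                                            # does B fire next interval?

    split = 400
    model = LogisticRegression().fit(X[:split], y[:split])
    prewarm = model.predict(X[split:])              # 1 -> invoke B early, so its start is warm

    actual = y[split:]
    avoided = ((prewarm == 1) & (actual == 1)).sum()
    print(f"cold starts avoided: {avoided}/{actual.sum()} "
          f"(unnecessary pre-warms: {((prewarm == 1) & (actual == 0)).sum()})")

A production system would have to weigh avoided cold starts against the cost of unnecessary pre-warms, which is exactly the trade-off the fixed and variable window baselines handle with static rules.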
Citations: 0
A cooperative task assignment framework with minimum cooperation cost in crowdsourcing systems
IF 7.7 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-21 | DOI: 10.1016/j.jnca.2024.104033
Bo Yin, Zeshu Ai, Jun Lu, Ying Feng
Crowdsourcing provides a new problem-solving paradigm that utilizes the intelligence of crowds to solve computer-hard problems. Task assignment is a foundational problem in crowdsourcing systems and applications. However, existing task assignment approaches often assume that workers operate independently; in reality, worker cooperation is necessary. In this paper, we address the cooperative task assignment (CTA) problem, in which a worker needs to pay a monetary cost to another worker in exchange for cooperation. Cooperative working also requires one task to be assigned to more than one worker to ensure the reliability of crowdsourcing services. We formalize the CTA problem with the goal of minimizing the total cooperation cost of all workers under the workload limitation of each worker. The challenge is that the individual cooperation cost a worker pays for a specific task depends strongly on the task distribution, which makes it difficult to obtain an assignment instance with a small cooperation cost. We prove that the CTA problem is NP-hard. We propose a two-stage cooperative task assignment framework that first assigns each task to one worker and then makes duplicate assignments. We also present solutions to address dynamic scenarios. Extensive experimental results show that the proposed framework can effectively reduce the cooperation cost.
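A deliberately simplified, hypothetical rendering of the two-stage framework described above: stage one spreads primary assignments across workers within a workload limit, and stage two duplicates each task to the partner with the lowest pairwise cooperation cost. The random cost matrix, the load-balancing rule for stage one, and the greedy partner choice are illustrative assumptions rather than the paper's algorithms.

    import random

    # Hypothetical greedy two-stage cooperative task assignment sketch.
    random.seed(0)
    N_WORKERS, N_TASKS, LIMIT = 4, 6, 4     # LIMIT = max tasks per worker (primary + duplicate)
    cost = [[0 if i == j else random.randint(1, 9) for j in range(N_WORKERS)]
            for i in range(N_WORKERS)]      # cost[i][j]: what worker i pays worker j to cooperate
    load = [0] * N_WORKERS

    # Stage 1: give each task a primary worker, balancing load within the workload limit.
    primary = []
    for t in range(N_TASKS):
        w = min((w for w in range(N_WORKERS) if load[w] < LIMIT), key=lambda w: load[w])
        primary.append(w)
        load[w] += 1

    # Stage 2: duplicate each task to the partner with the smallest cooperation cost.
    total = 0
    pairs = []
    for t, p in enumerate(primary):
        candidates = [w for w in range(N_WORKERS) if w != p and load[w] < LIMIT]
        q = min(candidates, key=lambda w: cost[p][w])
        load[q] += 1
        total += cost[p][q]
        pairs.append((p, q))

    print("task -> (primary, partner):", pairs)
    print("total cooperation cost:", total)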
Citations: 0