
Journal of Network and Computer Applications: Latest Publications

An online cost optimization approach for edge resource provisioning in cloud gaming
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-30 DOI: 10.1016/j.jnca.2024.104008
Guoqing Tian, Li Pan, Shijun Liu

Cloud gaming (CG), as an emergent computing paradigm, is revolutionizing the gaming industry. Cloud gaming service providers (CGSPs) have begun to integrate edge computing with the cloud to provide services, aiming to maximize gaming service revenue while accounting for the costs incurred and the benefits generated. However, maximizing gaming service revenue is non-trivial: future requests are not known beforehand, and poor resource provisioning may result in exorbitant costs. In addition, the edge resource provisioning (ERP) problem in CG necessitates a trade-off between cost and the queuing delays that are unavoidable in CG systems. To address this, we propose ERPOL (ERP Online), a convenient and efficient approach that formulates ERP strategies for CGSPs without requiring any future information. The performance of ERPOL has been theoretically validated and experimentally evaluated; experiments driven by real-world traces show that it achieves significant cost savings. The proposed approach has the potential to transform how CGSPs manage their infrastructure.
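The abstract does not spell out ERPOL's decision rule, but the flavor of online provisioning without future knowledge can be illustrated with a toy reactive policy: rent an extra edge server when the request backlog grows, release one after a sustained idle period, and charge a rental cost per time slot. All names and thresholds here are invented for illustration; this is not the paper's algorithm.

```python
# Illustrative online provisioning rule (hypothetical, not the paper's ERPOL):
# scale up when the request queue passes a threshold, scale down after a
# sustained idle period, trading rental cost against queuing backlog.

def provision_online(arrivals, capacity_per_server=2, queue_threshold=4,
                     idle_limit=3, cost_per_server_slot=1.0):
    """Process a trace of per-slot arrival counts without future knowledge.

    Returns (total_cost, max_queue) for the run.
    """
    servers, queue, idle_slots = 1, 0, 0
    total_cost, max_queue = 0.0, 0
    for a in arrivals:
        queue += a
        served = min(queue, servers * capacity_per_server)
        queue -= served
        max_queue = max(max_queue, queue)
        if queue > queue_threshold:        # backlog builds: rent one more server
            servers += 1
            idle_slots = 0
        elif queue == 0:                   # idle: release after a cool-down
            idle_slots += 1
            if idle_slots >= idle_limit and servers > 1:
                servers -= 1
                idle_slots = 0
        else:                              # busy but under threshold
            idle_slots = 0
        total_cost += servers * cost_per_server_slot
    return total_cost, max_queue
```

Running it on a short synthetic trace shows the cost of reacting late to a burst: the queue peaks before the extra server is rented, which is exactly the cost-versus-queuing trade-off the paper formalizes.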

Citations: 0
Credit risk prediction for small and micro enterprises based on federated transfer learning frozen network parameters
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-30 DOI: 10.1016/j.jnca.2024.104009
Xiaolei Yang, Zhixin Xia, Junhui Song, Yongshan Liu

To accelerate convergence and improve the accuracy of the federated shared model, this paper proposes a Federated Transfer Learning method based on frozen network parameters. We set up configurations that freeze two, three, and four network layers, eight sets of experimental tasks, and two target users for comparative experiments on frozen network parameters, and use homomorphic-encryption-based Federated Transfer Learning to transfer parameters confidentially; the accuracy, convergence speed, and loss function values of the experiments are compared and analyzed. The experiments show that the model with three frozen layers achieves the highest accuracy, with average values of 0.9165 and 0.9164 for the two target users. Its convergence is also the most favorable, completing within 25 iterations, and its training times for the two users are the shortest, at 1732.0 s and 1787.3 s, respectively. The loss function values show a minimum of 0.181 for User-II and 0.2061 for User-III. Finally, unlabeled, non-empty enterprise credit data are predicted, with 61.08% of users classified as low-risk. By freezing source-domain network parameters in a shared network, this approach achieves rapid convergence of the target network model while saving computational resources.
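A minimal sketch of the core idea of freezing the first k layers, using a toy model in which each "layer" is a single scalar weight; the federated aggregation and encryption machinery of the paper is omitted, and all names and values are illustrative.

```python
# Toy sketch of layer freezing (hypothetical, not the paper's pipeline):
# the first k layers keep their source-domain weights, while later layers
# are updated by local gradient steps.

def freeze_layers(weights, k):
    """Return a per-layer trainable mask with the first k layers frozen."""
    return [i >= k for i in range(len(weights))]

def local_update(weights, grads, trainable, lr=0.1):
    """Apply one gradient step only to trainable (unfrozen) layers."""
    return [w - lr * g if t else w
            for w, g, t in zip(weights, grads, trainable)]

weights = [1.0, 1.0, 1.0, 1.0, 1.0]   # one scalar "parameter" per layer
grads   = [0.5, 0.5, 0.5, 0.5, 0.5]
mask = freeze_layers(weights, 3)       # freeze three layers, the paper's best setting
updated = local_update(weights, grads, mask)
# frozen layers keep their source-domain value; unfrozen layers move by lr * grad
```

Freezing shrinks the set of parameters that must be trained (and exchanged), which is why the paper reports both faster convergence and lower computational cost.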

Citations: 0
Striking the perfect balance: Multi-objective optimization for minimizing deployment cost and maximizing coverage with Harmony Search
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-29 DOI: 10.1016/j.jnca.2024.104006
Quang Truong Vu, Phuc Tan Nguyen, Thi Hanh Nguyen, Thi Thanh Binh Huynh, Van Chien Trinh, Mikael Gidlund

In the Internet of Things (IoT) era, wireless sensor networks play a critical role in communication systems. One of the most crucial problems in wireless sensor networks is the sensor deployment problem, which seeks a strategy for placing sensors within the surveillance area so that two fundamental criteria of wireless sensor networks, coverage and connectivity, are guaranteed. In this paper, we solve the multi-objective deployment problem so that area coverage is maximized and the number of nodes used is minimized. Since Harmony Search is a simple yet well-suited algorithm for this task, we propose a Harmony Search algorithm with several enhancements, including heuristic initialization, random sampling of sensor types, weighted fitness evaluation, and the use of different components in the fitness function, to solve the sensor deployment problem in a heterogeneous wireless sensor network where sensors have different sensing ranges. On top of that, a probabilistic sensing model is used to reflect how sensors behave realistically. We also extend our solution to 3D areas and propose a realistic 3D dataset to evaluate it. Simulation results show that the proposed algorithms solve the area coverage problem more efficiently than previous algorithms. In a large-scale evaluation, our best proposal improves the coverage ratio by 10.20% and reduces cost by 27.65% compared to the best baseline.
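For readers unfamiliar with Harmony Search, a bare-bones version of its loop on a toy one-dimensional objective (minimize x²) shows the memory-consideration (HMCR) and pitch-adjustment (PAR) steps that the paper enhances; the actual deployment objective, sensor encoding, and enhancement proposals are not reproduced here, and all parameter values are illustrative.

```python
import random

# Generic Harmony Search skeleton on a toy objective (minimize x^2).
# hmcr: probability of drawing a value from harmony memory;
# par: probability of pitch-adjusting that value by a small step.

def harmony_search(objective, bounds, hm_size=10, hmcr=0.9, par=0.3,
                   bandwidth=0.1, iters=500, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [rng.uniform(lo, hi) for _ in range(hm_size)]
    for _ in range(iters):
        if rng.random() < hmcr:                 # memory consideration
            x = rng.choice(memory)
            if rng.random() < par:              # pitch adjustment
                x += rng.uniform(-bandwidth, bandwidth)
        else:                                   # fresh random value
            x = rng.uniform(lo, hi)
        x = min(max(x, lo), hi)
        worst = max(memory, key=objective)
        if objective(x) < objective(worst):     # replace the worst harmony
            memory[memory.index(worst)] = x
    return min(memory, key=objective)

best = harmony_search(lambda x: x * x, (-5.0, 5.0))
```

In the paper's setting, each "harmony" would instead encode a full sensor placement, with the fitness combining coverage and node count.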

Citations: 0
Evolving techniques in cyber threat hunting: A systematic review
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-23 DOI: 10.1016/j.jnca.2024.104004
Arash Mahboubi, Khanh Luong, Hamed Aboutorab, Hang Thanh Bui, Geoff Jarrad, Mohammed Bahutair, Seyit Camtepe, Ganna Pogrebna, Ejaz Ahmed, Bazara Barry, Hannah Gately

In the rapidly changing cybersecurity landscape, threat hunting has become a critical proactive defense against sophisticated cyber threats. While traditional security measures are essential, their reactive nature often falls short in countering malicious actors’ increasingly advanced tactics. This paper explores the crucial role of threat hunting, a systematic, analyst-driven process aimed at uncovering hidden threats lurking within an organization’s digital infrastructure before they escalate into major incidents. Despite its importance, the cybersecurity community grapples with several challenges, including the lack of standardized methodologies, the need for specialized expertise, and the integration of cutting-edge technologies like artificial intelligence (AI) for predictive threat identification. To tackle these challenges, this survey paper offers a comprehensive overview of current threat hunting practices, emphasizing the integration of AI-driven models for proactive threat prediction. Our research explores critical questions regarding the effectiveness of various threat hunting processes and the incorporation of advanced techniques such as augmented methodologies and machine learning. Our approach involves a systematic review of existing practices, including frameworks from industry leaders like IBM and CrowdStrike. We also explore resources for intelligence ontologies and automation tools. The background section clarifies the distinction between threat hunting and anomaly detection, emphasizing systematic processes crucial for effective threat hunting. We formulate hypotheses based on hidden states and observations, examine the interplay between anomaly detection and threat hunting, and introduce iterative detection methodologies and playbooks for enhanced threat detection. Our review encompasses supervised and unsupervised machine learning approaches, reasoning techniques, graph-based and rule-based methods, as well as other innovative strategies. We identify key challenges in the field, including the scarcity of labeled data, imbalanced datasets, the need for integrating multiple data sources, the rapid evolution of adversarial techniques, and the limited availability of human expertise and data intelligence. The discussion highlights the transformative impact of artificial intelligence on both threat hunting and cybercrime, reinforcing the importance of robust hypothesis development. This paper contributes a detailed analysis of the current state and future directions of threat hunting, offering actionable insights for researchers and practitioners to enhance threat detection and mitigation strategies in the ever-evolving cybersecurity landscape.

Citations: 0
Energy efficient multi-user task offloading through active RIS with hybrid TDMA-NOMA transmission
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-22 DOI: 10.1016/j.jnca.2024.104005
Baoshan Lu, Junli Fang, Junxiu Liu, Xuemin Hong

In this paper, we address the challenge of minimizing system energy consumption for task offloading within non-line-of-sight (NLoS) mobile edge computing (MEC) environments. Our approach integrates an active reconfigurable intelligent surface (RIS) and employs a hybrid transmission scheme combining time division multiple access (TDMA) and non-orthogonal multiple access (NOMA) to enhance the quality of service (QoS) for user task offloading. The formulation of this problem as a non-convex optimization issue presents significant challenges due to its inherent complexity. To overcome this, we introduce an innovative method termed element refinement-based differential evolution (ERBDE). Initially, through rigorous theoretical analysis, we optimally determine the allocation of local computation resources, computation resources at the base station (BS), and transmit power of users, while maintaining fixed values for the offloading ratio, amplification factor, phase of the reflecting element, and the transmission period. Subsequently, we employ the differential evolution (DE) algorithm to iteratively fine-tune the offloading ratio, amplification factor, phase of the reflecting element, and transmission period towards near-optimal configurations. Our simulation results demonstrate that the implementation of active RIS-supported task offloading utilizing the hybrid TDMA-NOMA scheme results in an average system energy consumption reduction of 80.3%.
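The element-refinement layer of ERBDE is specific to the paper, but the underlying differential evolution loop (DE/rand/1 mutation with binomial crossover and greedy selection) can be sketched on a toy sphere function; parameter names and values below are illustrative, not the paper's configuration.

```python
import random

# Bare-bones differential evolution (DE/rand/1/bin) on a 2-D sphere function.
# f: mutation scale factor; cr: crossover rate; greedy one-to-one selection.

def differential_evolution(objective, dim=2, pop_size=20, f=0.8, cr=0.9,
                           bounds=(-5.0, 5.0), iters=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct donors different from the current vector
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)        # guarantee at least one mutated gene
            trial = [
                min(max(a[j] + f * (b[j] - c[j]), lo), hi)
                if (rng.random() < cr or j == j_rand) else pop[i][j]
                for j in range(dim)
            ]
            if objective(trial) <= objective(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=objective)

best = differential_evolution(lambda v: sum(x * x for x in v))
```

In the paper's problem, the DE vector would instead hold the offloading ratio, amplification factor, reflecting-element phase, and transmission period, with the remaining variables solved analytically per candidate.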

Citations: 0
An expandable and cost-effective data center network
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-22 DOI: 10.1016/j.jnca.2024.104001
Mengjie Lv, Xuanli Liu, Hui Dong, Weibei Fan

With the rapid growth of data volume, the escalating complexity of data businesses, and the increasing reliance on the Internet for daily life and production, the scale of data centers is constantly expanding. The data center network (DCN) is a bridge connecting large-scale servers in data centers for large-scale distributed computing. Building a DCN structure that is flexible and cost-effective while keeping its topological properties unchanged during network expansion is a challenging problem. In this paper, we propose an expandable and cost-effective DCN, namely HHCube, which is based on the half-hypercube structure. Further, we analyze several characteristics of HHCube, including its connectivity, diameter, and bisection bandwidth. We also design an efficient algorithm to find the shortest path between any two distinct nodes and present a fault-tolerant routing scheme to obtain a fault-tolerant path between any two distinct fault-free nodes in HHCube. Meanwhile, we present two local diagnosis algorithms to determine the status of nodes in HHCube under the PMC model and the MM* model, respectively. Our results demonstrate that despite the presence of up to 25% faulty nodes in HHCube, both algorithms achieve a correct diagnosis rate exceeding 90%. Finally, we compare HHCube with state-of-the-art DCNs including Fat-Tree, DCell, BCube, Ficonn, and HSDC, and the experimental results indicate that HHCube is an excellent candidate for constructing expandable and cost-effective DCNs.
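HHCube's own routing algorithm is not given in the abstract, but the classical hypercube idea it builds on is easy to state: a shortest path between two node labels flips their differing address bits one at a time, so the hop count equals the Hamming distance of the labels. The sketch below shows standard hypercube routing, not HHCube's half-hypercube variant.

```python
# Shortest-path routing in a standard binary hypercube: walk from src to dst
# by flipping, in order, each bit where the two node labels differ (their XOR).

def hypercube_path(src, dst):
    """Return a shortest node sequence from src to dst by flipping bits."""
    path, cur, diff = [src], src, src ^ dst
    bit = 0
    while diff:
        if diff & 1:                # this address bit differs: flip it
            cur ^= (1 << bit)
            path.append(cur)
        diff >>= 1
        bit += 1
    return path
```

Because every hop fixes exactly one differing bit, the path length is the Hamming distance, which is also the basis for the diameter analysis in hypercube-family networks.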

Citations: 0
Zebra: A cluster-aware blockchain consensus algorithm
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-20 DOI: 10.1016/j.jnca.2024.104003
Ji Wan, Kai Hu, Jie Li, Yichen Guo, Hao Su, Shenzhang Li, Yafei Ye

The consensus algorithm is the core of a permissioned blockchain; it directly affects the performance and scalability of the system. Performance is limited by the computing power and network bandwidth of a single leader node. Most existing blockchain systems adopt a mesh or star topology, and blockchain performance decreases rapidly as the number of nodes increases. To solve this problem, we first design the n-k cluster tree and a corresponding generation algorithm, which supports rapid reconfiguration of nodes. We then propose the Zebra consensus algorithm, a cluster-tree-based consensus algorithm. Compared to PBFT, it has higher throughput and supports more nodes under the same hardware conditions. However, the tree network topology enhances scalability while also increasing latency among nodes. To reduce transaction latency, we designed the Pipeline-Zebra consensus algorithm, which further improves the performance of blockchain systems in a tree network topology through parallel message propagation and block validation. The message complexity of the algorithm is O(n). Experimental results show that the performance of the algorithm proposed in this paper can reach 2.25 times that of the PBFT algorithm, and it supports four times the number of nodes under the same hardware.
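The quoted O(n) message complexity follows from a basic tree property: broadcasting down a tree sends exactly one message per edge, and a tree on n nodes has n − 1 edges, versus the O(n²) all-to-all phases of flat PBFT. A toy count over an illustrative dict-based tree (not Zebra's actual cluster structure):

```python
# Count messages needed to push one block from the root to every node of a
# tree given as {parent: [children, ...]}: one message per parent->child edge.

def broadcast_messages(tree, root):
    """Count messages for a root-to-all broadcast over the tree."""
    count, stack = 0, [root]
    while stack:
        node = stack.pop()
        for child in tree.get(node, []):
            count += 1              # one message per edge traversed
            stack.append(child)
    return count

# hypothetical leader "L" with two cluster heads and three followers: 6 nodes
tree = {"L": ["A", "B"], "A": ["a1", "a2"], "B": ["b1"]}
```

For any tree, the count is n − 1 (here 5 messages for 6 nodes), which is where the O(n) bound comes from.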

Citations: 0
Network quality prediction in a designated area using GPS data
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-08-18 DOI: 10.1016/j.jnca.2024.104002
Onur Sahin, Vanlin Sathya

This study introduces a groundbreaking method for predicting network quality in LTE and 5G environments using only GPS data, focusing on pinpointing specific locations within a designated area to determine network quality as either good or poor. By leveraging machine learning algorithms, we have successfully demonstrated that geographical location can be a key indicator of network performance. Our research involved initially classifying network quality using traditional signal strength metrics and then shifting to rely exclusively on GPS coordinates for prediction. Employing a variety of classifiers, including Decision Tree, Random Forest, Gradient Boosting and K-Nearest Neighbors, we uncovered notable correlations between location data and network quality. This methodology provides network operators with a cost-effective and efficient tool for identifying and addressing network quality issues based on geographic insights. Additionally, we explored the potential implications of our study in various use cases, including healthcare, education, and urban industrialization, highlighting its versatility across different sectors. Our findings pave the way for innovative network management strategies, especially critical in the contexts of both LTE and the rapidly evolving landscape of 5G technology.
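A minimal sketch of the idea, with assumed coordinates and labels rather than the paper's dataset: classify network quality as "good" or "poor" from GPS position alone using K-Nearest Neighbors, one of the classifiers the authors list.

```python
# Toy location-based quality classifier (stdlib only, hypothetical data).
import math
from collections import Counter

def knn_predict(train, point, k=3):
    """train: list of ((lat, lon), label); point: (lat, lon) to classify."""
    by_dist = sorted(train, key=lambda sample: math.dist(sample[0], point))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    # Hypothetical measurements: one "good" cluster, one "poor" cluster.
    train = [((41.01, 28.95), "good"), ((41.02, 28.96), "good"),
             ((41.01, 28.96), "good"), ((40.90, 28.80), "poor"),
             ((40.91, 28.81), "poor"), ((40.90, 28.81), "poor")]
    print(knn_predict(train, (41.015, 28.955)))  # good
    print(knn_predict(train, (40.905, 28.805)))  # poor
```

The nearest-neighbor vote captures the paper's core observation: if network quality correlates with location, points near previously measured positions tend to share their label, so coordinates alone can stand in for signal-strength metrics at prediction time.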

Journal of Network and Computer Applications, vol. 231, Article 104002.
Citations: 0
A hybrid Bi-level management framework for caching and communication in Edge-AI enabled IoT
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-08-17 DOI: 10.1016/j.jnca.2024.104000
Samane Sharif, Mohammad Hossein Yaghmaee Moghaddam, Seyed Amin Hosseini Seno

The proliferation of IoT devices has led to a surge in network traffic, resulting in higher energy usage and response delays. In-network caching has emerged as a viable solution to this issue. However, caching IoT data faces two key challenges: the transient nature of IoT content and the unknown spatiotemporal popularity of that content. Additionally, maintaining a global view of a dynamic IoT network is problematic due to the high communication overhead involved. To tackle these challenges, this paper presents an adaptive management approach that jointly optimizes caching and communication in IoT networks using a novel bi-level control method called BC3. The approach employs two types of controllers: a global ILP-based optimal controller for long-term timeslots and local learning-based controllers for short-term timeslots. The long-term controller periodically establishes a global cache policy for the network and sends specific cache rules to each edge server. The local controller at each edge server solves the joint problem of bandwidth allocation and cache adaptation using a deep reinforcement learning (DRL) technique. The main objective is to minimize energy consumption and system response time by utilizing both global and local observations. Experimental results demonstrate that the proposed approach increases the cache hit rate by approximately 12% and uses 11% less energy than the other methods; the higher hit rate translates into a reduction of about 17% in response time for user requests. Our bi-level control approach offers a promising solution for leveraging the network's global visibility while balancing communication overhead (and hence energy consumption) against system performance. Additionally, the proposed method has the lowest cache eviction, around 19% lower than the lowest eviction among the comparison methods; the eviction metric evaluates the effectiveness of adaptive caching for transient data.
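A toy sketch of the bi-level split (the class, policy, and names are assumptions, not BC3 itself): a long-term "global" step fixes an edge server's cache capacity, while the short-term "local" step serves requests — here with plain LRU rather than a learned DRL policy — and tracks the hit rate it would report back.

```python
# Illustrative two-level cache control: global resize + local LRU serving.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # insertion/recency order = LRU order
        self.hits = self.requests = 0

    def resize(self, capacity):
        """Long-term (global controller) decision: change cache quota."""
        self.capacity = capacity
        while len(self.store) > capacity:
            self.store.popitem(last=False)  # evict least recently used

    def get(self, key):
        """Short-term (local controller) decision: serve with LRU caching."""
        self.requests += 1
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)     # mark as most recently used
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)
        self.store[key] = True
        return False

    def hit_rate(self):
        return self.hits / self.requests if self.requests else 0.0

if __name__ == "__main__":
    cache = EdgeCache(capacity=2)
    for key in ["a", "b", "a", "c", "a", "b"]:
        cache.get(key)
    print(round(cache.hit_rate(), 2))  # 0.33 on this toy trace
```

In the paper's framework the global step is ILP-driven and the local step is learned; the point of the sketch is only the division of labor — slow, network-wide quota decisions on top of fast, per-server request handling.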

Journal of Network and Computer Applications, vol. 232, Article 104000.
Citations: 0
A blockchain transaction mechanism in the delay tolerant network
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-08-14 DOI: 10.1016/j.jnca.2024.103998
Lingling Zi, Xin Cong

Current blockchain systems place high demands on network connectivity and data transmission rate: nodes must receive the latest blocks in time to update the blockchain, and must immediately broadcast newly generated blocks to other nodes for consensus. This restricts blockchains to networks with real-time connectivity, so the existence of delay tolerant networks poses a great challenge to deploying blockchain systems. To address this challenge, a novel blockchain transaction mechanism is proposed. First, the block structure is modified by adding a flag, and on this basis the extrachain is defined. Second, based on the blockchain transaction process, transaction verification and consensus algorithms on the extrachain are presented. Third, an extrachain selection algorithm and an appending algorithm are proposed so that the extrachain can be appended to the blockchain fairly and safely. Finally, an extrachain transmission scheme is presented that broadcasts blocks generated in the delayed network to the normal network. Theoretical analysis and simulation experiments further illustrate the efficiency of the proposed mechanism.
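A hand-wavy sketch of the mechanism described above (the field names and re-linking logic are assumptions, not the paper's protocol): blocks carry a flag marking whether they were produced while disconnected; such blocks accumulate on an extrachain and are re-linked onto the main chain's tip once connectivity returns.

```python
# Illustrative flagged blocks and extrachain appending (stdlib only).
import hashlib

def make_block(prev_hash, data, delayed):
    """Build a block; 'delayed' is the flag for blocks mined while offline."""
    body = f"{prev_hash}|{data}|{delayed}"
    return {"hash": hashlib.sha256(body.encode()).hexdigest(),
            "prev": prev_hash, "data": data, "flag": delayed}

def append_extrachain(chain, extrachain):
    """Re-link extrachain blocks onto the main chain tip, then clear it."""
    for b in extrachain:
        relinked = make_block(chain[-1]["hash"], b["data"], b["flag"])
        chain.append(relinked)
    extrachain.clear()
    return chain

if __name__ == "__main__":
    chain = [make_block("0" * 64, "genesis", delayed=False)]
    # Blocks produced during a partition; their prev links are unresolved.
    extra = [make_block("?", "tx1", True), make_block("?", "tx2", True)]
    append_extrachain(chain, extra)
    print(len(chain))                             # 3 blocks on the main chain
    print(chain[1]["prev"] == chain[0]["hash"])   # True: hashes re-linked
```

The real mechanism additionally verifies extrachain transactions and selects among competing extrachains before appending; this sketch shows only the structural idea of deferring chain linkage until the network is reachable again.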

Journal of Network and Computer Applications, vol. 231, Article 103998.
Citations: 0