
Journal of Network and Computer Applications: Latest Publications

Label-aware learning to enhance unsupervised cross-domain rumor detection
IF 8.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-12-09 · DOI: 10.1016/j.jnca.2024.104084
Hongyan Ran, Xiaohong Li, Zhichang Zhang
Recently, extensive research has made significant progress in improving the performance of rumor detection. However, identifying rumors in an unseen domain remains an elusive challenge. To address this issue, we propose an unsupervised cross-domain rumor detection model that enhances contrastive learning and cross-attention with label-aware learning to alleviate domain shift. The model performs cross-domain feature alignment and enforces target samples to align with the corresponding prototypes of a given source domain. Moreover, we use a cross-attention mechanism on pairs of source and target data with the same labels to learn domain-invariant representations, because the samples in a domain pair tend to express similar semantic patterns, especially in people’s attitudes (e.g., supporting or denying) towards the same category of rumors. In addition, we add a label-aware learning module as an enhancement component to learn the correlations between labels and instances during training and to generate a better label distribution that replaces the original one-hot label vector in guiding model training. At the same time, we use the label representation learned by this module to guide the production of pseudo-labels for the target samples. We conduct experiments on four groups of cross-domain datasets and show that our proposed model achieves state-of-the-art performance.
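As a concrete illustration of the label-aware idea, the sketch below (assumed, not the authors' code) turns label-instance similarities into a soft distribution that replaces one-hot targets, and reuses the same label representations to produce confident pseudo-labels for target-domain samples. All names, the temperature, and the threshold are illustrative.

```python
import torch
import torch.nn.functional as F

def label_aware_distribution(inst_emb: torch.Tensor,
                             label_emb: torch.Tensor,
                             tau: float = 1.0) -> torch.Tensor:
    """inst_emb: (batch, d) instance features; label_emb: (num_labels, d)."""
    sim = inst_emb @ label_emb.T / tau     # label-instance correlations
    return F.softmax(sim, dim=-1)          # soft distribution instead of one-hot

def pseudo_label(target_emb: torch.Tensor,
                 label_emb: torch.Tensor,
                 threshold: float = 0.8):
    """Pseudo-label target samples whose soft distribution is confident enough."""
    dist = label_aware_distribution(target_emb, label_emb)
    conf, labels = dist.max(dim=-1)
    keep = conf >= threshold               # discard low-confidence targets
    return labels[keep], keep
```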
Citations: 0
A comprehensive plane-wise review of DDoS attacks in SDN: Leveraging detection and mitigation through machine learning and deep learning
IF 8.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-12-09 · DOI: 10.1016/j.jnca.2024.104081
Dhruv Kalambe, Divyansh Sharma, Pushkar Kadam, Shivangi Surati
The traditional network architecture in Software Defined Networking (SDN) is divided into three distinct planes to incorporate intelligence into networks. However, this structure has also introduced security threats and challenges across these planes, including the widely recognized Distributed Denial of Service (DDoS) attack. It is therefore essential to predict such attacks and their variants at the different planes of SDN to maintain seamless network operations. Apart from network-based and flow-analysis-based solutions for detecting these attacks, researchers have also explored machine learning and deep learning based prediction and mitigation approaches applied at the different planes of software defined networking. Consequently, a detailed analysis of DDoS attacks in SDN, together with a review of their learning-based prediction and mitigation strategies, needs to be studied and presented in detail. This paper investigates and analyzes DDoS attacks on each plane of SDN and studies and compares machine learning, advanced federated learning, and deep learning approaches for predicting these attacks. Real-world case studies are also explored to ground the analysis. In addition, low-rate DDoS attacks and novel research directions are discussed, which SDN experts and researchers can further utilize to confront the effects of DDoS attacks on SDN.
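For orientation, the snippet below is an illustrative flow-level DDoS detector of the kind the reviewed ML approaches build on; it is not from the survey, and the file paths and feature set are placeholders for whatever dataset and flow collector is in use.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X = np.load("flow_features.npy")   # (n_flows, n_features): pkt rate, byte rate, SYN ratio, ...
y = np.load("flow_labels.npy")     # 0 = benign, 1 = DDoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```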
Citations: 0
MDQ: A QoS-Congestion Aware Deep Reinforcement Learning Approach for Multi-Path Routing in SDN
IF 8.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-12-09 · DOI: 10.1016/j.jnca.2024.104082
Lizeth Patricia Aguirre Sanchez, Yao Shen, Minyi Guo
The challenge of link overutilization in networking persists, prompting the development of load-balancing methods such as multi-path strategies and flow rerouting. However, traditional rule-based heuristics struggle to adapt dynamically to network changes, leading to complex models and lengthy convergence times that are unsuitable for diverse QoS demands, particularly in time-sensitive applications. Existing routing approaches often result in specific types of traffic overloading links or causing general congestion, prolonged convergence delays, and scalability challenges. To tackle these issues, we propose MDQ, a QoS-Congestion Aware Deep Reinforcement Learning Approach for Multi-Path Routing in Software-Defined Networking. Leveraging Deep Reinforcement Learning, MDQ intelligently selects optimal multi-paths and allocates traffic based on flow needs. We design a multi-objective function combining link and queue metrics to establish an efficient routing policy. Moreover, we integrate a congestion severity index into the learning process and incorporate a traffic classification phase to handle mice and elephant flows, ensuring that diverse class-of-service requirements are adequately addressed. Through a RYU-Docker-based OpenFlow framework integrating a Live QoS Monitor, a DNC Classifier, and Online Routing, results demonstrate a 19%–22% reduction in delay compared to state-of-the-art algorithms, exhibiting robust reliability across diverse network-dynamics scenarios.
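The abstract gives no exact equation for the multi-objective function, so the following is a hypothetical sketch of how a reward combining link/queue metrics with a congestion severity index might look; the weights, normalization, and threshold are assumptions.

```python
def mdq_style_reward(delay, jitter, loss, link_util, queue_occ,
                     w=(0.4, 0.2, 0.2, 0.1, 0.1), congestion_threshold=0.8):
    """All inputs normalized to [0, 1]; higher reward = better path choice."""
    # congestion severity index: grows once link utilization passes the threshold
    severity = max(0.0, (link_util - congestion_threshold)
                        / (1.0 - congestion_threshold))
    qos_penalty = w[0] * delay + w[1] * jitter + w[2] * loss
    load_penalty = w[3] * link_util + w[4] * queue_occ
    return -(qos_penalty + load_penalty + severity)
```

An agent maximizing this reward is pushed both to satisfy per-flow QoS and to steer traffic away from links approaching saturation.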
Citations: 0
Caching or re-computing: Online cost optimization for running big data tasks in IaaS clouds
IF 8.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-12-09 · DOI: 10.1016/j.jnca.2024.104080
Xiankun Fu, Li Pan, Shijun Liu
High computing power and large storage capacity are necessary for running big data tasks, which leads to high infrastructure costs. Infrastructure-as-a-Service (IaaS) clouds can provide configuration environments and computing resources needed for running big data tasks, while saving users from expensive software and hardware infrastructure investments. Many studies show that the cost of computation can be reduced by caching intermediate results and reusing them instead of repeating computations. However, the storage cost incurred by caching a large number of intermediate results over a long period of time may exceed the cost of computation, ultimately leading to an increase in total cost instead. For making optimal caching decisions, future usage profiles for big data tasks are needed, but it is generally very hard to predict them precisely. In this paper, to address this problem, we propose two practical online algorithms, one deterministic and the other randomized, which can determine whether to cache intermediate results to reduce the total cost of big data tasks without requiring any future information. We prove theoretically that the competitive ratio of the proposed deterministic (randomized) algorithm is $\min\left(2-\frac{1-\eta}{\delta},\ 2-\frac{\eta}{\beta}\right)$ (resp., $\frac{e}{e-1}$). Using real-world Wikipedia data as well as synthetic datasets, we verify the effectiveness of our proposed algorithms through a large number of experiments based on the price of Alibaba’s public IaaS cloud products.
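This cache-or-recompute trade-off has the shape of the classic ski-rental problem. The sketch below is inspired by the abstract, not the paper's actual algorithm or notation: a result stays cached until the storage cost accrued since its last use equals the cost of re-computing it, then it is evicted; this break-even rule is the standard intuition behind deterministic competitive ratios of the form quoted above.

```python
class CacheOrRecompute:
    """Ski-rental-style online caching decision for one intermediate result."""

    def __init__(self, recompute_cost, storage_cost_per_hour):
        self.c = recompute_cost           # price of re-running the task
        self.s = storage_cost_per_hour    # price of keeping the result cached
        self.cached = False
        self.idle_hours = 0.0

    def tick(self, hours):
        """Time passes with no request; evict at the break-even point."""
        if self.cached:
            self.idle_hours += hours
            if self.idle_hours * self.s >= self.c:
                self.cached = False       # storage spend matched re-compute cost

    def request(self):
        """Serve a request and return the cost paid for this access."""
        cost = 0.0 if self.cached else self.c
        self.cached, self.idle_hours = True, 0.0
        return cost
```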
Citations: 0
Complex networks for Smart environments management
IF 8.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-12-05 · DOI: 10.1016/j.jnca.2024.104088
Annamaria Ficara, Hocine Cherifi, Xiaoyang Liu, Luiz Fernando Bittencourt, Maria Fazio
{"title":"Complex networks for Smart environments management","authors":"Annamaria Ficara, Hocine Cherifi, Xiaoyang Liu, Luiz Fernando Bittencourt, Maria Fazio","doi":"10.1016/j.jnca.2024.104088","DOIUrl":"https://doi.org/10.1016/j.jnca.2024.104088","url":null,"abstract":"","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"43 1","pages":""},"PeriodicalIF":8.7,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A survey on energy efficient medium access control for acoustic wireless communication networks in underwater environments
IF 8.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-12-04 · DOI: 10.1016/j.jnca.2024.104079
Walid K. Hasan, Iftekhar Ahmad, Daryoush Habibi, Quoc Viet Phung, Mohammad Al-Fawa'reh, Kazi Yasin Islam, Ruba Zaheer, Haitham Khaled
Underwater communication plays a crucial role in monitoring the aquatic environment on Earth. Owing to their unique characteristics, underwater acoustic channels present distinct challenges, including lengthy signal transmission delays, limited data transfer bandwidth, variable signal quality, and fluctuating channel conditions. Furthermore, most Underwater Wireless Acoustic Network (UWAN) devices rely on battery power, and the difficulty of replacing or recharging batteries underwater compounds these challenges. Underwater acoustic communications are heavily constrained by available resources (e.g., very limited bandwidth and energy storage). Consequently, the medium access control (MAC) protocol, which distributes available resources among nodes, is critical to maintaining a reliable underwater communication system. This study presents an extensive review of current research in MAC for UWAN. The paper explores the unique challenges and characteristics of UWAN that are critical to MAC protocol design. Subsequently, a diverse range of energy-efficient MAC techniques is categorized and reviewed. Potential future research avenues in energy-efficient MAC protocols are discussed, with particular emphasis on the challenges of enabling broader implementation of the Green Internet of Underwater Things (GIoUT).
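To make the energy-efficiency argument concrete, here is a back-of-the-envelope duty-cycle energy model (watts and seconds) of the kind commonly used to compare energy-efficient acoustic MAC protocols; the power figures below are illustrative defaults, not measured modem values.

```python
def node_energy_per_period(p_tx=2.0, p_rx=0.8, p_idle=0.3, p_sleep=0.002,
                           t_tx=5.0, t_rx=20.0, duty_cycle=0.1, period=3600.0):
    """Energy (joules) one node spends per period under a duty-cycled MAC."""
    t_awake = duty_cycle * period
    t_idle = max(0.0, t_awake - t_tx - t_rx)   # awake but not transceiving
    t_sleep = period - t_awake
    return (p_tx * t_tx + p_rx * t_rx
            + p_idle * t_idle + p_sleep * t_sleep)

# compare a 10% duty cycle against an always-on radio
print(node_energy_per_period(duty_cycle=0.1), node_energy_per_period(duty_cycle=1.0))
```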
Citations: 0
Optimizing 5G network slicing with DRL: Balancing eMBB, URLLC, and mMTC with OMA, NOMA, and RSMA
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-28 · DOI: 10.1016/j.jnca.2024.104068
Silvestre Malta, Pedro Pinto, Manuel Fernández-Veiga
The advent of 5th Generation (5G) networks has introduced network slicing as a paradigm shift, enabling the provision of services with distinct Quality of Service (QoS) requirements. The 5th Generation New Radio (5G NR) standard supports the use cases Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), which demand dynamic adaptation of network slicing to meet diverse traffic needs. This dynamic adaptation presents both a critical challenge and a significant opportunity for improving 5G network efficiency. This paper proposes a Deep Reinforcement Learning (DRL) agent that performs dynamic resource allocation in 5G wireless network slicing according to the traffic requirements of the 5G use cases in two scenarios: eMBB with URLLC and eMBB with mMTC. The DRL agent evaluates the performance of different decoding schemes, such as Orthogonal Multiple Access (OMA), Non-Orthogonal Multiple Access (NOMA), and Rate Splitting Multiple Access (RSMA), and applies the best decoding scheme in these scenarios under different network conditions. The DRL agent has been tested to maximize the sum rate in the eMBB-with-URLLC scenario and to maximize the number of successfully decoded devices in the eMBB-with-mMTC scenario, both across different combinations of device counts, power gains, and numbers of allocated frequencies. The results show that the DRL agent dynamically chooses the best decoding scheme and achieves an efficiency between 84% and 100% in maximizing the sum rate and the number of decoded devices in both evaluated scenarios.
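To see why the decoding-scheme choice changes the achievable sum rate, the snippet below gives the textbook two-user rates (bits/s/Hz) for OMA versus NOMA with successive interference cancellation (SIC); this is an assumed illustration, not the paper's system model, and RSMA (which adds a common stream) is omitted for brevity.

```python
import numpy as np

def oma_sum_rate(p1, p2, g1, g2, n0=1e-3):
    """Each user transmits on an orthogonal half of the band."""
    return 0.5 * np.log2(1 + p1 * g1 / n0) + 0.5 * np.log2(1 + p2 * g2 / n0)

def noma_sum_rate(p_total, alpha, g_strong, g_weak, n0=1e-3):
    """Superposition coding: the weak user gets power share alpha (> 0.5) and
    treats the strong user's signal as noise; the strong user removes the weak
    user's signal via SIC before decoding its own."""
    r_weak = np.log2(1 + alpha * p_total * g_weak
                     / ((1 - alpha) * p_total * g_weak + n0))
    r_strong = np.log2(1 + (1 - alpha) * p_total * g_strong / n0)
    return r_weak + r_strong

# example channel gains: one strong user, one weak user
print(oma_sum_rate(0.5, 0.5, 1.0, 0.05))
print(noma_sum_rate(1.0, 0.8, 1.0, 0.05))
```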
Citations: 0
QuIDS: A Quantum Support Vector machine-based Intrusion Detection System for IoT networks
IF 8.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-26 · DOI: 10.1016/j.jnca.2024.104072
Rakesh Kumar, Mayank Swarnkar
With the increasing popularity of IoT, there has been a noticeable surge in security breaches associated with vulnerable IoT devices. To identify and counter such attacks, Intrusion Detection Systems (IDS) are deployed. However, IoT devices use device-specific application layer protocols such as MQTT and CoAP, which pose an additional burden on traditional IDS. Several Machine Learning (ML) and Deep Learning (DL) based IDS have been developed to detect malicious IoT network traffic. In recent times, however, a wide variety of IoT devices has become available on the market, resulting in frequent installation and uninstallation of IoT devices based on users’ needs. Moreover, ML- and DL-based IDS must be trained with sufficient device-specific attack data for each IoT device, consuming a noticeable amount of training time. To solve these problems, we propose QuIDS, which utilizes a Quantum Support Vector Classifier to classify attacks in an IoT network. QuIDS requires very little training data compared to ML or DL approaches to train on and accurately identify attacks in the IoT network. QuIDS extracts eight flow-level features from IoT network traffic and utilizes them over four quantum bits for training. We experimented with QuIDS on two publicly available datasets and found its average recall, precision, and F1-score to be 91.1%, 84.3%, and 86.4%, respectively. Moreover, comparing QuIDS with the ML and DL methods, we found that QuIDS achieved average recall and precision rates higher by 37.7%, 24.4%, and 36.9%, respectively.
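The abstract does not specify QuIDS's circuit, so the following is one plausible realization of a 4-qubit quantum-kernel classifier over eight flow features, sketched with qiskit-machine-learning (roughly the v0.7 API; class names and signatures may differ across versions, and the data files are placeholders).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

X = np.load("flows.npy")     # placeholder: eight flow-level features per row
y = np.load("labels.npy")    # 0 = benign, 1 = attack

X4 = PCA(n_components=4).fit_transform(X)                 # 8 features -> 4 qubits
X4 = MinMaxScaler(feature_range=(0, np.pi)).fit_transform(X4)

qsvc = QSVC(quantum_kernel=FidelityQuantumKernel(
    feature_map=ZZFeatureMap(feature_dimension=4, reps=2)))
qsvc.fit(X4[:200], y[:200])                               # small training set
print("accuracy:", qsvc.score(X4[200:300], y[200:300]))
```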
Citations: 0
Gwydion: Efficient auto-scaling for complex containerized applications in Kubernetes through Reinforcement Learning
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-26 · DOI: 10.1016/j.jnca.2024.104067
José Santos, Efstratios Reppas, Tim Wauters, Bruno Volckaert, Filip De Turck
Containers have reshaped application deployment and life-cycle management in recent cloud platforms. The paradigm shift from large monolithic applications to complex graphs of loosely-coupled microservices aims to increase deployment flexibility and operational efficiency. However, efficient allocation and scaling of microservice applications is challenging due to their intricate inter-dependencies. Existing works do not consider microservice dependencies, which can lead to performance degradation when service demand increases. As dependencies increase, communication between microservices becomes more complex and frequent, leading to slower response times and higher resource consumption, especially during high demand. In addition, performance issues in one microservice can trigger a ripple effect across dependent services, exacerbating the degradation across the entire application. This paper studies the impact of microservice inter-dependencies on auto-scaling by proposing Gwydion, a novel framework that enables different auto-scaling goals through Reinforcement Learning (RL) algorithms. Gwydion has been developed based on the OpenAI Gym library and customized for the popular Kubernetes (K8s) platform to bridge the gap between RL and auto-scaling research by training RL algorithms in real cloud environments for two opposing reward strategies: cost-aware and latency-aware. Gwydion focuses on improving resource usage and reducing the application’s response time by considering microservice inter-dependencies when scaling horizontally. Experiments with microservice benchmark applications, such as Redis Cluster (RC) and Online Boutique (OB), show that RL agents can reduce deployment costs and the application’s response time compared to default scaling mechanisms, achieving up to 50% lower latency while avoiding performance degradation. For RC, cost-aware algorithms can reduce the number of deployed pods (2 to 4), resulting in slightly higher latency (300μs to 6 ms) but lower resource consumption. For OB, all RL algorithms exhibit a notable response-time improvement by considering all microservices in the observation space, enabling the sequential triggering of actions across different deployments. This leads to nearly 30% cost savings while maintaining consistently lower latency throughout the experiment. Gwydion aims to advance auto-scaling research in a rapidly evolving dynamic cloud environment.
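Gwydion's actual reward functions are not given in the abstract; the sketch below is a hypothetical shaping of the two opposing strategies it names, where the weights and the latency SLO are assumptions. An agent would call this once per scaling step, after observing the deployment's latency and replica count.

```python
def scaling_reward(p95_latency_ms, pods, mode,
                   latency_slo_ms=200.0, max_pods=10):
    """Negative cost: the agent trades replica spend against SLO violations."""
    slo_violation = max(0.0, p95_latency_ms / latency_slo_ms - 1.0)
    cost = pods / max_pods                      # normalized resource spend
    if mode == "cost-aware":
        return -(0.8 * cost + 0.2 * slo_violation)
    return -(0.2 * cost + 0.8 * slo_violation)  # latency-aware
```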
Citations: 0
Handover Authenticated Key Exchange for Multi-access Edge Computing
IF 7.7 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-22 · DOI: 10.1016/j.jnca.2024.104071
Yuxin Xia, Jie Zhang, Ka Lok Man, Yuji Dong
Authenticated Key Exchange (AKE) plays a significant role in ensuring communication security. However, in some Multi-access Edge Computing (MEC) scenarios where a moving end-node successively connects to a sequence of edge-nodes, repeatedly running AKE protocols between the end-node and each edge-node is costly in terms of time and computing resources. Moreover, the cloud needs to be involved to assist the authentication between them, which goes against MEC’s purpose of bringing cloud services closer to the end-user. To address the above problems, this paper proposes a new type of AKE, named Handover Authenticated Key Exchange (HAKE). In HAKE, an earlier AKE procedure hands over authentication material and some parameters to the temporally next AKE procedure, thereby saving resources and reducing the participation of the remote cloud. Following the HAKE framework, we propose a concrete HAKE protocol based on Elliptic Curve Diffie–Hellman (ECDH) key exchange and ratcheted key exchange. We then verify its security via Burrows–Abadi–Needham (BAN) logic and the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. Finally, we evaluate and test its performance. The results show that the HAKE protocol achieves its security goals and reduces communication and computation costs compared to similar protocols.
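The following is a minimal sketch of the two primitives the protocol builds on, X25519 ECDH plus an HKDF-based ratchet, using the `cryptography` package; it is not the HAKE protocol itself, and the info labels are illustrative. The point of the ratchet is that after one full AKE, each handover only needs a cheap symmetric key derivation.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def ecdh_session_key(my_priv, peer_pub):
    """Initial AKE: derive a 32-byte chain key from an ECDH shared secret."""
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"hake-session").derive(my_priv.exchange(peer_pub))

def ratchet_forward(chain_key):
    """Each handover advances the chain: old key -> (next chain key, handover key)."""
    out = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
               info=b"hake-ratchet").derive(chain_key)
    return out[:32], out[32:]

end_node, edge_node = X25519PrivateKey.generate(), X25519PrivateKey.generate()
ck = ecdh_session_key(end_node, edge_node.public_key())   # full AKE runs once
ck, hk = ratchet_forward(ck)   # cheap key material for the next edge-node
```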
Citations: 0