
Latest publications in the Journal of Network and Computer Applications

Third layer blockchains are being rapidly developed: Addressing state-of-the-art paradigms and future horizons
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-28 DOI: 10.1016/j.jnca.2024.104044
Saeed Banaeian Far, Seyed Mojtaba Hosseini Bamakan
Undoubtedly, blockchain technology has emerged as one of the most fascinating advancements of recent decades. Its rapid development has attracted a diverse range of experts from various fields. Over the past five years, numerous blockchains have been launched, hosting a multitude of applications with varying objectives. However, a key limitation of blockchain-based services and applications is their isolation within their respective host blockchains, which prevents them from recording or accessing data on other blockchains. This limitation has spurred developers to explore solutions for connecting different blockchains without relying on centralized intermediaries. This new wave of projects, officially called Layer 3 (L3) initiatives, has introduced innovative concepts such as cross-chain transactions, multi-chain frameworks, hyper-chains, and more. This study provides an overview of these significant concepts and L3 projects while categorizing them into interoperability and scalability solutions. We then discuss the opportunities, challenges, and future horizons of L3 solutions and present a SWOT (Strengths–Weaknesses–Opportunities–Threats) analysis of the two groups of L3 solutions and all other proposals. As an important part, we introduce the concept of Universal decentralized finance (DeFi) as one of the most exciting applications of L3s, which decreases transaction costs, enhances the security of crowdfunding, and provides many improvements in distributed lending-borrowing processes. The final part of this study maps the blockchain trilemma (the blockchain triangle problem) onto L3s and identifies current challenges from the L3 perspective. Ultimately, the future directions of L3 for both the academic and industry sectors are discussed.
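As a purely illustrative aside (not part of the paper), the Python sketch below shows a heavily simplified lock-and-mint transfer between two in-memory ledgers, which is one way to make the idea of a cross-chain transaction without a centralized intermediary concrete. The `Ledger` class, the `bridge_escrow` account, and all balances are hypothetical; real L3 designs rely on light clients, relayers, or validity proofs that are not modeled here.

```python
class Ledger:
    """Toy in-memory ledger: account -> balance."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount, "insufficient balance"
        self.balances[src] = self.balances.get(src, 0) - amount
        self.balances[dst] = self.balances.get(dst, 0) + amount


def lock_and_mint(chain_a, chain_b, user, amount, bridge="bridge_escrow"):
    """Lock tokens in an escrow account on chain A, then credit a wrapped
    representation of the same amount to the user on chain B."""
    chain_a.transfer(user, bridge, amount)                              # lock on the source chain
    chain_b.balances[user] = chain_b.balances.get(user, 0) + amount     # mint on the target chain


a = Ledger({"alice": 10})
b = Ledger({})
lock_and_mint(a, b, "alice", 4)
print(a.balances, b.balances)   # {'alice': 6, 'bridge_escrow': 4} {'alice': 4}
```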
Citations: 0
Robustness of multilayer interdependent higher-order network
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-24 DOI: 10.1016/j.jnca.2024.104047
Hao Peng, Yifan Zhao, Dandan Zhao, Bo Zhang, Cheng Qian, Ming Zhong, Jianmin Han, Xiaoyang Liu, Wei Wang
In real-world complex systems, most networks are interconnected with other networks through interlayer dependencies, forming multilayer interdependent networks. In each system, the interactions between nodes are not limited to pairwise links but also include higher-order interactions among three or more individuals, thus inducing a multilayer interdependent higher-order network (MIHN). First, we build four types of artificial MIHN models (i.e., chain-like, tree-like, star-like, and ring-like), in which the higher-order interactions are described by simplicial complexes and the interlayer dependency is built via one-to-one matching dependency links. Then, we propose a cascading failure model on MIHN and suggest a corresponding percolation-based theory to study the robustness of MIHN by investigating the giant connected component (GCC) and the percolation threshold. We find that the density of the simplicial complexes and the number of layers of the network affect its percolation behavior. When the density of simplicial complexes exceeds a certain threshold, the network undergoes a double transition, and increasing the number of network layers significantly increases the vulnerability of the MIHN. By comparing the simulation results of the four types of MIHNs, we observe that, under the same density of simplicial complexes, the size of the GCC after removing a certain number of nodes is independent of the topological structure of the MIHN. Although the cascading failure processes of MIHNs with different structures differ, the final results tend to be the same. We further analyze in detail the cascading failure process of MIHNs with different structures and elucidate the factors influencing the speed of cascading failures. Among these four types of MIHNs, the chain-like MIHN has the slowest cascading failure rate and more stable robustness than the other three structures, followed by the tree-like MIHN and the star-like MIHN. The ring-like MIHN has the fastest cascading failure rate and the weakest robustness due to its ring structure. Finally, we give the time required for MIHNs with different structures to reach the stable state during the cascading failure process and find that the closer the network is to the percolation threshold, the more time it requires to reach the stable state.
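For readers who want to experiment with the kind of percolation-based robustness analysis described above, here is a minimal Python sketch (not from the paper) that simulates a cascading failure on two interdependent layers coupled by one-to-one matching dependency links and reports the surviving giant connected component. It simplifies the setting substantially: the layers are ordinary pairwise graphs rather than simplicial complexes, the cascade rule is the classic GCC-based one, and the graph sizes and attack fraction are arbitrary.

```python
import random
import networkx as nx

def gcc_nodes(G):
    """Node set of the giant connected component (empty set for an empty graph)."""
    if G.number_of_nodes() == 0:
        return set()
    return set(max(nx.connected_components(G), key=len))

def interdependent_cascade(layer_a, layer_b, attack_fraction, seed=0):
    """Toy cascading-failure simulation on two interdependent layers.
    Node i in layer A depends on node i in layer B (one-to-one matching),
    so a node survives only if it lies in its own layer's GCC and its
    partner also survives. Returns the surviving GCC fraction of layer A."""
    rng = random.Random(seed)
    A, B = layer_a.copy(), layer_b.copy()
    n0 = A.number_of_nodes()
    A.remove_nodes_from(rng.sample(sorted(A.nodes()), int(attack_fraction * n0)))
    while True:
        keep_a = gcc_nodes(A)
        keep_b = gcc_nodes(B) & keep_a   # B nodes need a surviving partner in A
        keep_a = keep_a & keep_b         # and vice versa
        if len(keep_a) == A.number_of_nodes() and len(keep_b) == B.number_of_nodes():
            break                        # no node failed in this round: stable state reached
        A = A.subgraph(keep_a).copy()
        B = B.subgraph(keep_b).copy()
    return len(gcc_nodes(A)) / n0

# Two Erdos-Renyi layers with matched node labels 0..499
A = nx.gnp_random_graph(500, 0.02, seed=1)
B = nx.gnp_random_graph(500, 0.02, seed=2)
print(interdependent_cascade(A, B, attack_fraction=0.4))
```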
Citations: 0
PTTS: Zero-knowledge proof-based private token transfer system on Ethereum blockchain and its network flow based balance range privacy attack analysis
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-24 DOI: 10.1016/j.jnca.2024.104045
Goshgar Ismayilov, Can Özturan
Blockchains are decentralized and immutable databases that are shared among the nodes of a network. Although blockchains have attracted a great deal of attention in recent years by disrupting traditional financial systems, transaction privacy is still a challenging issue that needs to be addressed and analyzed. In the first part of this paper, we propose a Private Token Transfer System (PTTS) for the Ethereum public blockchain. For the proposed framework, a zero-knowledge-based protocol has been designed using Zokrates and integrated into our private token smart contract. With the help of the web user interface designed, end users can interact with the smart contract without any third-party setup. In the second part of the paper, we provide a security and privacy analysis, including the replay attack and the balance range privacy attack, which is modeled as a network flow problem. It is shown that, in case some balance ranges are deliberately leaked to particular organizations or adversarial entities, it is possible to extract meaningful information about user balances by employing minimum-cost flow network algorithms that have polynomial complexity. The experimental study reports the Ethereum gas consumption and proof generation times for the proposed framework. It also reports network solution times and goodness rates for a subset of addresses under the balance range privacy attack with respect to the number of addresses, the number of transactions, and the ratio of leaked transfer transaction amounts.
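Since the balance range attack is posed as a minimum-cost flow problem, it can be solved with standard polynomial-time solvers. The toy networkx example below shows the flavor of such an instance; the account names, demands, leaked capacity bounds, and edge costs are invented for illustration and do not reproduce the paper's exact formulation.

```python
import networkx as nx

# Hypothetical instance: the adversary knows who transferred to whom, the total
# amount leaving "alice" and arriving at "dave", and leaked upper bounds on how
# much could have moved through each intermediate account.
G = nx.DiGraph()
G.add_node("alice", demand=-8)   # negative demand: supplies 8 units of flow
G.add_node("dave", demand=8)     # positive demand: absorbs 8 units
G.add_node("bob", demand=0)
G.add_node("carol", demand=0)
G.add_edge("alice", "bob", capacity=6, weight=1)    # capacity = leaked upper bound
G.add_edge("alice", "carol", capacity=5, weight=2)
G.add_edge("bob", "dave", capacity=6, weight=1)
G.add_edge("carol", "dave", capacity=5, weight=2)

flow = nx.min_cost_flow(G)   # polynomial-time minimum-cost flow
print(flow)                  # e.g. {'alice': {'bob': 6, 'carol': 2}, 'bob': {'dave': 6}, ...}
```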
Citations: 0
Leveraging blockchain and federated learning in Edge-Fog-Cloud computing environments for intelligent decision-making with ECG data in IoT
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-19 DOI: 10.1016/j.jnca.2024.104037
Shinu M. Rajagopal, Supriya M., Rajkumar Buyya
Blockchain technology combined with Federated Learning (FL) offers a promising solution for enhancing privacy, security, and efficiency in medical IoT applications across edge, fog, and cloud computing environments. This approach enables multiple medical IoT devices at the network edge to collaboratively train a global machine learning model without sharing raw data, addressing privacy concerns associated with centralized data storage. This paper presents a blockchain and FL-based Smart Decision Making framework for ECG data in microservice-based IoT medical applications. Leveraging edge/fog computing for real-time critical applications, the framework implements an FL model across the edge, fog, and cloud layers. Evaluation against criteria including energy consumption, latency, execution time, cost, and network usage shows that edge-based deployment outperforms fog and cloud, with significant advantages in energy consumption (0.1% vs. Fog, 0.9% vs. Cloud), network usage (1.1% vs. Fog, 31% vs. Cloud), cost (3% vs. Fog, 20% vs. Cloud), execution time (16% vs. Fog, 28% vs. Cloud), and latency (1% vs. Fog, 79% vs. Cloud).
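The FL part of such a framework ultimately reduces to a weighted aggregation of locally trained models. The sketch below shows the standard FedAvg step with NumPy under assumed toy model shapes; it is a generic illustration, not the paper's ECG model or its blockchain integration.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine per-client model weights, weighting each
    client by the size of its local dataset.
    client_weights: one list of numpy arrays (layers) per client."""
    total = sum(client_sizes)
    aggregated = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += (n / total) * w
    return aggregated

# Three edge devices, each holding a tiny two-layer model (shapes are arbitrary)
clients = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
global_model = fedavg(clients, client_sizes=[120, 80, 200])
print([w.shape for w in global_model])
```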
Citations: 0
Controller load optimization strategies in Software-Defined Networking: A survey
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-16 DOI: 10.1016/j.jnca.2024.104043
Yong Liu, Yuanhang Ge, Qian Meng, Quanze Liu
In traditional networks, the static configuration of devices increases the complexity of network management and limits the development of network functions. Software-Defined Networking (SDN) employs controllers to manage switches, thereby simplifying network management. However, as networks grow in scale, the early single-controller architecture gradually becomes a performance bottleneck for the entire network. To solve this problem, SDN uses multiple controllers to manage the network, which improves network scalability. However, due to dynamic changes in network traffic, multi-controller architectures face the challenge of load imbalance among controllers. In recent years, researchers have proposed various novel load optimization strategies to improve resource utilization and the performance of SDN networks. This paper reviews load optimization strategies in SDN, including the latest research results on switch migration and controller placement. Subsequently, we analyze the advantages and disadvantages of existing load optimization strategies. Finally, we discuss future development directions for load optimization strategies.
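To give a concrete feel for switch migration, one of the surveyed strategy families, here is a generic greedy heuristic in Python (not a specific algorithm from the survey): whenever the busiest controller exceeds a threshold relative to the average load, its lightest switch is moved to the least loaded controller. The loads and threshold are made-up numbers.

```python
def controller_loads(assignment, switch_load):
    """Aggregate per-controller load from a switch -> controller assignment."""
    loads = {}
    for sw, ctrl in assignment.items():
        loads[ctrl] = loads.get(ctrl, 0) + switch_load[sw]
    return loads

def migrate_once(assignment, switch_load, threshold=1.2):
    """One greedy switch-migration step; returns True if a switch was moved."""
    loads = controller_loads(assignment, switch_load)
    avg = sum(loads.values()) / len(loads)
    busiest = max(loads, key=loads.get)
    idlest = min(loads, key=loads.get)
    if busiest == idlest or loads[busiest] <= threshold * avg:
        return False
    candidates = [sw for sw, c in assignment.items() if c == busiest]
    lightest = min(candidates, key=lambda s: switch_load[s])   # cheapest switch to move
    assignment[lightest] = idlest
    return True

# Five switches initially mapped to two controllers
assignment = {"s1": "c1", "s2": "c1", "s3": "c1", "s4": "c2", "s5": "c2"}
switch_load = {"s1": 900, "s2": 400, "s3": 300, "s4": 200, "s5": 100}
while migrate_once(assignment, switch_load):
    pass
print(assignment)   # load ends up roughly balanced between c1 and c2
```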
Citations: 0
On challenges of sixth-generation (6G) wireless networks: A comprehensive survey of requirements, applications, and security issues
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-14 DOI: 10.1016/j.jnca.2024.104040
Muhammad Sajjad Akbar, Zawar Hussain, Muhammad Ikram, Quan Z. Sheng, Subhas Chandra Mukhopadhyay
Fifth-generation (5G) wireless networks are likely to offer high data rates, increased reliability, and low delay for mobile, personal, and local area networks. Along with the rapid growth of smart wireless sensing and communication technologies, data traffic has increased significantly, and existing 5G networks are not able to fully support future massive data traffic for services, storage, and processing. To meet the challenges ahead, both research communities and industry are exploring the sixth-generation (6G) Terahertz-based wireless network, which is expected to be offered to industrial users in just ten years. Gaining knowledge and understanding of the different challenges and facets of 6G is crucial for meeting the requirements of future communication and addressing evolving quality of service (QoS) demands. This survey provides a comprehensive examination of specifications, requirements, applications, and enabling technologies related to 6G. It covers the disruptive and innovative integration of 6G with advanced architectures and networks such as software-defined networks (SDN), network functions virtualization (NFV), Cloud/Fog computing, and Artificial Intelligence (AI)-oriented technologies. The survey also addresses privacy and security concerns and presents potential futuristic use cases such as virtual reality, smart healthcare, and Industry 5.0. Furthermore, it identifies current challenges and outlines future research directions to facilitate the deployment of 6G networks.
Citations: 0
A deep reinforcement learning approach towards distributed Function as a Service (FaaS) based edge application orchestration in cloud-edge continuum
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-10 DOI: 10.1016/j.jnca.2024.104042
Mina Emami Khansari, Saeed Sharifian
Serverless computing has emerged as a new cloud computing model which, in contrast to IoT, offers unlimited and scalable access to resources. This paradigm improves resource utilization, cost, scalability, and resource management, specifically in terms of irregular incoming traffic. While cloud computing has been known as a reliable computing and storage solution for hosting IoT applications, it is not suitable for bandwidth-limited, real-time, and secure applications. Therefore, shifting the resources of the cloud-edge continuum towards the edge can mitigate these limitations. In serverless architecture, applications implemented as Function as a Service (FaaS) include a set of chained, event-driven microservices which have to be assigned to available instances. IoT microservice orchestration is still a challenging issue in serverless computing architectures due to the dynamic, heterogeneous, and large-scale IoT environment with limited resources. The integration of FaaS and distributed Deep Reinforcement Learning (DRL) can transform serverless computing by improving microservice execution effectiveness and optimizing real-time application orchestration. This combination improves scalability and adaptability across the edge-cloud continuum. In this paper, we present a novel Deep Reinforcement Learning (DRL) based microservice orchestration approach for the serverless edge-cloud continuum to minimize resource utilization and delay. This approach, unlike existing methods, is distributed and requires only a minimal subset of realistic data in each interval to find optimal compositions in the proposed edge serverless architecture, and is thus suitable for IoT environments. Experiments conducted using a number of real-world scenarios demonstrate an improvement of 18% in the number of successfully composed applications compared to state-of-the-art methods, including the Load Balance and Shortest Path algorithms.
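As a very rough sketch of how a learning agent could pick a placement tier per function invocation, the snippet below uses a stateless, bandit-style tabular update rather than the deep RL formulation of the paper; the tiers, costs, and capacities are invented placeholders, and a real agent would condition on observed state (queue lengths, bandwidth, chain structure).

```python
import random

TIERS = ["edge", "fog", "cloud"]
COST = {"edge": 1.0, "fog": 2.0, "cloud": 4.0}      # latency-like cost when not overloaded
CAPACITY = {"edge": 2, "fog": 5, "cloud": 50}        # concurrent invocations a tier absorbs

def step(load, tier):
    """Place one invocation on `tier`; overloading a tier incurs a heavy penalty."""
    overloaded = load[tier] >= CAPACITY[tier]
    reward = -(COST[tier] + (10.0 if overloaded else 0.0))
    new_load = dict(load)
    new_load[tier] += 1
    return reward, new_load

random.seed(0)
q = {t: 0.0 for t in TIERS}        # one value per placement action (no state, unlike full DRL)
alpha, eps = 0.1, 0.2
for _ in range(2000):              # episodes of ten chained invocations each
    load = {t: 0 for t in TIERS}
    for _ in range(10):
        tier = random.choice(TIERS) if random.random() < eps else max(q, key=q.get)
        reward, load = step(load, tier)
        q[tier] += alpha * (reward - q[tier])   # incremental value update
print(q)   # learned per-tier values trade off raw cost against overload risk
```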
Citations: 0
RT-APT: A real-time APT anomaly detection method for large-scale provenance graph
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-10 DOI: 10.1016/j.jnca.2024.104036
Zhengqiu Weng, Weinuo Zhang, Tiantian Zhu, Zhenhao Dou, Haofei Sun, Zhanxiang Ye, Ye Tian
Advanced Persistent Threats (APTs) are prevalent in the field of cyber attacks, where attackers employ advanced techniques to control targets and exfiltrate data without being detected by the system. Existing APT detection methods rely heavily on expert rules or specific training scenarios, resulting in a lack of both generality and reliability. Therefore, this paper proposes a novel real-time APT attack anomaly detection system for large-scale provenance graphs, named RT-APT. Firstly, a provenance graph is constructed from kernel logs, and the WL subtree kernel algorithm is utilized to aggregate contextual information of nodes in the provenance graph; in this way we obtain vector representations. Secondly, the FlexSketch algorithm transforms the streaming provenance graph into a sequence of feature vectors. Finally, the K-means clustering algorithm is performed on benign feature vector sequences, where each cluster represents a different system state. Thus, we can identify abnormal behaviors during system execution, which enables RT-APT to detect unknown attacks and extract long-term system behaviors. Experiments have been carried out to explore the parameter settings under which RT-APT performs best. In addition, we compare RT-APT with state-of-the-art approaches on three datasets: Laboratory, StreamSpot, and Unicorn. The results demonstrate that our proposed method outperforms the state-of-the-art approaches in terms of runtime performance, memory overhead, and CPU usage.
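The last stage of the pipeline (clustering benign feature vectors and flagging snapshots that are far from every learned cluster) can be sketched with scikit-learn as below. The embeddings are random stand-ins for the WL-subtree/FlexSketch features, and the 99th-percentile threshold is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 32))       # 500 benign snapshots, 32-dim embeddings

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(benign)
dist_to_centre = kmeans.transform(benign).min(axis=1)   # distance to the nearest cluster centre
threshold = np.percentile(dist_to_centre, 99)           # tolerate ~1% false alarms on benign data

def is_anomalous(snapshot_vec):
    """Flag a new snapshot embedding as anomalous if it is far from every benign cluster."""
    return kmeans.transform(snapshot_vec.reshape(1, -1)).min() > threshold

print(is_anomalous(rng.normal(0.0, 1.0, size=32)))   # resembles benign states -> likely False
print(is_anomalous(rng.normal(6.0, 1.0, size=32)))   # far from benign states  -> likely True
```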
Citations: 0
Joint optimization scheme for task offloading and resource allocation based on MO-MFEA algorithm in intelligent transportation scenarios
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-10 DOI: 10.1016/j.jnca.2024.104039
Mingyang Zhao, Chengtai Liu, Sifeng Zhu
With the surge of transportation data and the diversification of services, the resources available for data processing in intelligent transportation systems have become more limited. To address this problem, this paper studies computation offloading and resource allocation in intelligent transportation systems that adopt edge computing, NOMA communication technology, and edge (content) caching. The goal is to minimize the time and energy the system consumes to process structured tasks from terminal devices by jointly optimizing the offloading decisions, caching strategies, computation resource allocation, and transmission power allocation. This is a nonconvex mixed-integer nonlinear programming problem. To solve this challenging problem, we propose a multi-task multi-objective optimization algorithm (MO-MFEA-S) with adaptive knowledge transfer based on MO-MFEA. The results of a large number of simulation experiments demonstrate the convergence and effectiveness of MO-MFEA-S.
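A minimal sketch of the bi-objective (delay, energy) evaluation underlying such offloading decisions is shown below, together with a crude random search for non-dominated solutions. All task sizes, CPU frequencies, the uplink rate, and power figures are invented, and the sketch omits the caching strategies, power allocation, and cross-task knowledge transfer that MO-MFEA-S actually optimizes.

```python
import random

TASKS = [{"cycles": 4e8, "bits": 2e6} for _ in range(8)]   # synthetic structured tasks
F_LOCAL, F_EDGE = 1e9, 8e9      # CPU frequencies (cycles/s) of the device and the edge server
P_TX, RATE = 0.5, 5e6           # transmit power (W) and uplink rate (bit/s)
KAPPA = 1e-27                   # effective switched-capacitance constant for local execution

def evaluate(decision):
    """Return (total_delay, total_energy) for a 0/1 offloading vector."""
    delay = energy = 0.0
    for task, offload in zip(TASKS, decision):
        if offload:   # transmit the input, then compute at the edge
            t_tx = task["bits"] / RATE
            delay += t_tx + task["cycles"] / F_EDGE
            energy += P_TX * t_tx
        else:         # execute locally
            delay += task["cycles"] / F_LOCAL
            energy += KAPPA * F_LOCAL ** 2 * task["cycles"]
    return delay, energy

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

random.seed(0)
front = []   # non-dominated offloading vectors found so far
for _ in range(300):
    d = tuple(random.randint(0, 1) for _ in TASKS)
    obj = evaluate(d)
    if not any(dominates(evaluate(e), obj) for e in front):
        front = [e for e in front if not dominates(obj, evaluate(e))] + [d]
print(sorted(set(evaluate(d) for d in front)))   # approximate delay-energy Pareto front
```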
Citations: 0
IMUNE: A novel evolutionary algorithm for influence maximization in UAV networks
IF 7.7 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-10-10 DOI: 10.1016/j.jnca.2024.104038
Jiaqi Chen, Shuhang Han, Donghai Tian, Changzhen Hu
In a network, influence maximization is the problem of identifying an optimal set of nodes from which to initiate influence propagation, thereby maximizing the influence spread. Current approaches to influence maximization suffer from limitations in accuracy and efficiency. Furthermore, most existing methods target the IC (Independent Cascade) diffusion model, and few solutions address dynamic networks. In this study, we focus on dynamic networks consisting of UAV (Unmanned Aerial Vehicle) clusters that perform coverage tasks and introduce IMUNE, an evolutionary algorithm for influence maximization in UAV networks. We first generate dynamic networks that simulate UAV coverage tasks and give a representation of these dynamic networks. Novel fitness functions in the evolutionary algorithm are designed to estimate the influence ability of a set of seed nodes in a dynamic process. On this basis, an integrated fitness function is proposed to fit both the IC and SI (Susceptible–Infected) models. Through improvements in the fitness functions and search strategies, IMUNE can find seed nodes that maximize influence spread in dynamic UAV networks with different diffusion models. Experimental results on UAV network datasets show the effectiveness and efficiency of the IMUNE algorithm in solving influence maximization problems.
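A common fitness ingredient for such evolutionary searches is a Monte-Carlo estimate of influence spread. The sketch below estimates spread under the Independent Cascade model on a static synthetic graph; it does not implement IMUNE's dynamic-network fitness functions, and the graph, activation probability, and seed sets are illustrative only.

```python
import random
import networkx as nx

def ic_spread(G, seeds, p=0.1, runs=200, seed=0):
    """Monte-Carlo estimate of expected influence spread under the Independent
    Cascade model: each newly activated node gets one chance to activate each
    inactive neighbour with probability p."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            newly_active = []
            for u in frontier:
                for v in G.neighbors(u):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        newly_active.append(v)
            frontier = newly_active
        total += len(active)
    return total / runs

# Such a spread estimate can serve as (part of) the fitness of a candidate seed set.
G = nx.barabasi_albert_graph(300, 3, seed=1)
hubs = [n for n, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5]]
print(ic_spread(G, hubs))                                          # high-degree seed set
print(ic_spread(G, random.Random(2).sample(list(G.nodes()), 5)))   # random seed set, usually lower
```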
Citations: 0