
Journal of Network and Computer Applications: Latest Publications

Joint container orchestrating and request routing for serverless edge computing-based simulation applications
IF 8.0 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-11-01 · Epub Date: 2025-08-08 · DOI: 10.1016/j.jnca.2025.104284
Yong Peng , Miao Zhang , Zhi Zhou , Hao Huang
Serverless edge computing dynamically invokes functions based on events, enabling on-demand code execution at the network edge and minimizing infrastructure management overhead. This computing paradigm is naturally suitable for event-driven distributed simulation applications, which involve frequent event interactions and stringent latency constraints. When running on top of geographically dispersed edge clouds, container orchestration and request routing have a significant impact on the performance of serverless edge computing-based simulations. In this paper, we propose an online orchestration framework for cross-edge serverless computing-based simulations, which aims to minimize resource cost and carbon emissions under a performance (i.e., latency) constraint by jointly optimizing container retention and request routing on the fly. This long-term cost minimization problem is difficult because it is NP-hard and involves uncertain future information. To address these dual challenges simultaneously, we carefully combine an online optimization technique with an approximate optimization method in a joint optimization framework. The framework first temporally decomposes the long-term time-coupled problem into a series of one-shot fractional problems via Lyapunov optimization, and then applies a randomized dependent rounding scheme to round each fractional solution to a near-optimal integral solution. The resulting online algorithm achieves outstanding performance, as verified by extensive trace-driven simulations.
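The drift-plus-penalty idea behind the Lyapunov decomposition can be illustrated in a few lines: each time slot, a virtual queue tracks accumulated latency-constraint violation, and the one-shot problem trades off cost against queue backlog. This is a generic sketch of the technique, not the paper's algorithm; the action fields and the tradeoff parameter `V` are hypothetical.

```python
def drift_plus_penalty_choice(actions, V, queue):
    """Pick the one-shot action minimizing V*cost + Q*latency_excess.

    Larger V favors low cost; a large backlog Q forces latency compliance.
    """
    return min(actions, key=lambda a: V * a["cost"] + queue * a["latency_excess"])

def update_queue(queue, latency_excess):
    """Virtual queue accumulating latency-constraint violation over slots."""
    return max(queue + latency_excess, 0.0)
```

With an empty queue the cheap-but-slow action wins; once the virtual queue builds up, the fast action is chosen, which is how the long-term constraint is enforced without knowing the future.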
Citations: 0
Balancing function performance and cluster load in serverless computing: A reinforcement learning solution
IF 8.0 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-11-01 · Epub Date: 2025-08-26 · DOI: 10.1016/j.jnca.2025.104299
Menglin Zhou , Bingbing Zheng , Li Pan , Shijun Liu
Serverless computing, an emerging cloud computing service model, enables developers to focus on business logic without managing the underlying resources, by decomposing applications into fine-grained functions that execute on demand. However, in heterogeneous server cluster environments, the bursty and transient nature of function requests presents significant resource scheduling challenges. To ensure the performance of function execution, newly created function instances are often scheduled to nodes with abundant resources. This leads to resource allocation imbalances under high load, which can trigger node failures. In this paper, we model function scheduling as an optimization problem that balances performance and load. We then propose a scheduling method based on the PPO algorithm, which guides decisions by analyzing node load and performance metrics in real time. For validation, we conducted experiments on the OpenFaaS platform using both real and simulated traces. The experimental results demonstrate that our method not only effectively reduces the risks associated with load imbalance but also improves function performance.
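A reward signal that balances function performance against cluster load, as the abstract describes, might combine an SLO-violation penalty with a load-imbalance penalty. This is a hypothetical sketch, not the paper's formulation: the `alpha` weight, the SLO normalization, and the use of the standard deviation of node utilization as the imbalance measure are all assumptions.

```python
import statistics

def scheduling_reward(latency, latency_slo, node_loads, alpha=0.5):
    """Reward for an RL scheduler: 0 is best, negative values penalize
    either SLO violation or uneven node utilization."""
    # Performance term: relative latency excess beyond the SLO (0 if met).
    perf = -max(latency - latency_slo, 0.0) / latency_slo
    # Load-balance term: spread of per-node utilization across the cluster.
    balance = -statistics.pstdev(node_loads)
    return alpha * perf + (1 - alpha) * balance
```

A policy trained against such a reward is discouraged both from piling instances onto the most resourceful node and from placements that blow the latency budget.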
Citations: 0
Secure and efficient data collaboration in cloud computing: Flexible delegation via hierarchical attribute-based signature
IF 8.0 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-11-01 · Epub Date: 2025-09-16 · DOI: 10.1016/j.jnca.2025.104328
Wenrui Jiang, Yongjian Liao, Qishan Gao, Han Xu, Hongwei Wang
Data collaboration allows multiple parties to jointly share and modify data stored on a cloud server. Because unauthorized users may create or modify shared data at will by tampering with requests sent by authorized users, replacing their contents with whatever the unauthorized users want to send, secure data collaboration in cloud computing requires integrity protection of requests and precise verification of user privileges. However, while maintaining data integrity, current signature schemes struggle to meet the following demands: fine-grained access control, high scalability, a flexible and controllable hierarchical delegation mechanism, and efficient signing and verification. Therefore, we designed a scalable and flexible hierarchical attribute-based signature (HABS) model and proposed a signing-policy HABS construction that uses a linear secret sharing scheme to build the access structure. Furthermore, we proved the unforgeability of our HABS scheme in the standard model. We also analyzed and tested the performance of our HABS scheme against related schemes, finding that ours incurs less signing computation in large-scale systems with complex policies. Finally, we present a concrete application scenario of HABS for data collaboration based on cloud computing.
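A linear secret sharing scheme of the kind used to realize such access structures can be illustrated with Shamir's (t, n) threshold sharing, a standard special case of LSSS. This is a generic textbook sketch over a small prime field, not the paper's construction.

```python
import random

P = 2**31 - 1  # a Mersenne prime, used as the field modulus

def share_secret(secret, n, t, rng=None):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial
    with constant term `secret` at points x = 1..n."""
    rng = rng or random.Random(0)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse via Fermat's little theorem (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t shares recover the secret; fewer than t reveal nothing, which is the property an attribute-based access structure exploits: only attribute sets satisfying the policy can assemble enough shares.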
Citations: 0
C-PFL: A committee-based personalized federated learning framework
IF 8.0 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-11-01 · Epub Date: 2025-09-16 · DOI: 10.1016/j.jnca.2025.104327
Lifan Pan , Hao Guo , Wanxin Li
Federated Learning (FL) is an emerging machine learning paradigm that enables multiple parties to collaboratively train a shared model while preserving data privacy. However, malicious clients pose a significant threat to FL systems. Their interference not only deteriorates model performance but also exacerbates the unfairness of the global model caused by data heterogeneity, leading to inconsistent performance across clients. We propose C-PFL, a committee-based personalized FL framework that improves both robustness and personalization. In contrast to prior approaches such as FedProto (which relies on the exchange of class prototypes), Ditto (which employs regularization between global and local models), and FedBABU (which freezes the classifier head during federated training), C-PFL introduces two principal innovations. First, C-PFL adopts a split-model design, updating only a shared backbone during global training while fine-tuning a personalized head locally. Second, a dynamic committee of high-contribution clients validates submitted updates without requiring public data, filtering low-quality or adversarial contributions before aggregation. Experiments on MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and AGNews show that C-PFL outperforms six state-of-the-art personalized FL baselines by up to 2.89% in non-adversarial settings, and by as much as 6.96% with 40% malicious clients. These results demonstrate C-PFL's ability to sustain high accuracy and stability across diverse non-IID scenarios, even with significant adversarial participation.
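The committee-filtered aggregation of shared backbones can be sketched as below; the flat parameter dictionaries, the scalar committee scores, and the acceptance threshold are illustrative assumptions, not the paper's exact protocol (personalized heads would stay on each client and never enter this step).

```python
def aggregate_backbones(updates, committee_scores, threshold=0.5):
    """Average backbone parameters only from updates the committee approved.

    updates: list of {param_name: value} dicts (backbone weights only).
    committee_scores: per-update validation scores in [0, 1].
    """
    accepted = [u for u, s in zip(updates, committee_scores) if s >= threshold]
    if not accepted:
        raise ValueError("committee rejected every update this round")
    # Plain FedAvg-style mean over the accepted subset.
    return {k: sum(u[k] for u in accepted) / len(accepted) for k in accepted[0]}
```

An outlier update with a low committee score is simply excluded, so a poisoned contribution cannot drag the shared backbone, while the averaged backbone is still broadcast to all clients for local head fine-tuning.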
Citations: 0
Real-time high-resolution hardware–software co-design neural architecture search for unmanned mobile platforms
IF 8.0 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-11-01 · Epub Date: 2025-08-20 · DOI: 10.1016/j.jnca.2025.104282
ZiWen Dou, Jun Tian, HaiQuan Sang, MingMing Zhang
Traditional manually designed high-resolution networks often struggle to balance accuracy and inference speed on mobile computing platforms. To address the large computational cost of high-resolution neural networks, which makes them difficult to deploy on mobile computing platforms, we simplified the traditional multi-scale feature extraction process by reducing the three-branch fusion to a two-branch fusion, establishing a lightweight network-level search space. We applied gradient descent to iteratively optimize the two levels of parameters within the search space and used a Pareto-optimality criterion to balance inference speed and accuracy. After convergence, we obtained a multi-scale feature extraction network structure that balances inference speed and accuracy. When combined with different feature decoders, this structure enables real-time semantic segmentation and monocular depth estimation on mobile platforms. A self-constructed unmanned mobile platform, built on a mobile computing platform, was used to collect image data from real-world environments to create a custom dataset. This dataset was used to validate the perception capabilities of the designed semantic segmentation and monocular depth estimation models on the mobile platform in real-world scenarios. The experiments demonstrate that our semantic segmentation model, designed for the NVIDIA NX mobile computing platform, achieves an accuracy of 71.7% on 1024×2048 high-resolution images with an inference speed of 25.25 FPS, a 39.2% improvement in inference speed over existing SOTA methods. Meanwhile, our monocular depth estimation model on the NVIDIA NX achieves an absolute relative error (Abs Rel) of 0.091 with an inference speed of 14.46 FPS, improving inference speed by 87.7% over existing methods while preserving high accuracy.
The code is available at https://github.com/douziwenhit/RealtimeSeg and https://github.com/douziwenhit/RealtimeMDE.
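Selecting architectures that jointly optimize accuracy and latency amounts to keeping the Pareto front of the candidate set: an architecture survives only if no other candidate is at least as good on both metrics and strictly better on one. A minimal sketch of that filter (the `acc`/`lat` field names are hypothetical):

```python
def pareto_front(candidates):
    """Return candidates not dominated in (accuracy up, latency down)."""
    def dominated(c):
        return any(
            o["acc"] >= c["acc"] and o["lat"] <= c["lat"]
            and (o["acc"] > c["acc"] or o["lat"] < c["lat"])
            for o in candidates
        )
    return [c for c in candidates if not dominated(c)]
```

The search then only has to choose among front members, trading a little accuracy for speed or vice versa, rather than considering strictly worse architectures.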
Citations: 0
Spectrum allocation method for millimeter-wave train-ground communication in high-speed rail based on multi-agent attention
IF 8.0 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-11-01 · Epub Date: 2025-09-08 · DOI: 10.1016/j.jnca.2025.104293
Yong Chen, Jiaojiao Yuan, Huaju Liu, Zhaofeng Xin
With the advancement of high-speed railways toward intelligent systems, large numbers of IoT devices have been deployed in both onboard and trackside systems. The resulting surge in data transmission has intensified competition for spectrum resources, significantly increasing the demand for train-ground communication systems with high capacity, low latency, and strong interference resilience. The millimeter-wave (mmWave) frequency band provides the large bandwidth needed to support massive data transmission from IoT devices. To address the low network capacity, high interference, and low spectral efficiency of mmWave train-ground communication systems under 5G-R for high-speed railways, we propose a multi-agent attention mechanism for mmWave spectrum allocation in train-ground communication. First, we analyze the spectrum requirements of mmWave base stations (BSs) and onboard mobile relay stations (MRSs), construct a spectrum resource allocation model whose optimization objective is to maximize system network capacity, and transform it into a Markov decision process (MDP). Next, considering the need for coordinated spectrum allocation and interference suppression between mmWave BSs and MRSs, we develop a resource optimization strategy using the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. Specifically, we incorporate a multi-head attention mechanism into the critic network of the MADDPG algorithm. This enhancement enables coordinated global–local strategy optimization through attention weight computation, thereby improving decision-making efficiency. Simulation results demonstrate that, compared to existing methods, our algorithm achieves superior spectrum allocation performance, significantly increases network capacity while reducing interference levels, and meets the spectrum requirements of HSR communication systems.
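The attention-weighted aggregation at the heart of an attention-augmented critic can be illustrated with plain scaled dot-product attention over the other agents' encoded states. This is a single-head, framework-free sketch of the generic mechanism only; the multi-head extension and the actual MADDPG critic wiring are beyond the abstract's detail.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention: weight each value by the softmax of
    its key's similarity to the query, then sum."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

In a critic, `query` would encode the evaluating agent's state-action pair and `keys`/`values` the other agents' contributions, so interfering neighbors receive larger weights than irrelevant ones.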
Citations: 0
RoamML distributed continual learning: Adaptive and flexible data-driven response for disaster recovery operations
IF 8.0 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2025-11-01 · Epub Date: 2025-09-09 · DOI: 10.1016/j.jnca.2025.104322
Simon Dahdal , Sara Cavicchi , Alessandro Gilli , Filippo Poltronieri , Mauro Tortonesi , Niranjan Suri , Cesare Stefanelli
In the aftermath of natural disasters, Human Assistance & Disaster Recovery (HADR) operations have to deal with disrupted communication networks and constrained resources. Such harsh conditions make high-communication-overhead ML approaches, whether centralized or distributed, impractical, hindering the adoption of AI solutions for a critical HADR function: building accurate and up-to-date situational awareness. To address this issue we developed Roaming Machine Learning (RoamML), a novel distributed continual learning framework designed for HADR operations and based on the premise that moving an ML model is more efficient and robust than either large dataset transfers or frequent model parameter updates. RoamML deploys a mobile AI agent that incrementally trains models across network nodes containing as-yet-unprocessed data; at each stop, the agent initiates a local training phase to update its internal ML model parameters. To prioritize the processing of strategically valuable data, RoamML agents follow a navigation system based on the concept of Data Gravity, leveraging multi-criteria decision-making techniques to simultaneously consider many objectives for agent routing optimization, including model learning efficiency and network resource utilization, while seamlessly blending subjective insights from expert judgments with objective metrics derived from quantifiable data to determine each next hop. We conducted extensive experiments to evaluate RoamML, demonstrating the framework's efficiency in training ML models in highly dynamic, resource-constrained environments. RoamML achieves performance similar to centralized ML training under ideal network conditions and outperforms it in a more realistic scenario with reduced network resources, ultimately saving up to 75% in bandwidth utilization across all experiments.
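A weighted-sum multi-criteria score for picking the agent's next hop, in the spirit of the Data Gravity navigation described above, can be sketched as follows. The criterion names (`data_volume`, `bandwidth`) and the weights are illustrative assumptions standing in for the framework's expert-judgment and measured inputs; real weights would come from an MCDM elicitation, not be hard-coded.

```python
def next_hop(candidates, weights):
    """Score candidate nodes by a weighted sum of normalized criteria
    (higher is better) and return the best one."""
    def score(node):
        return sum(weights[criterion] * node[criterion] for criterion in weights)
    return max(candidates, key=score)
```

Usage: with `weights = {"data_volume": 0.6, "bandwidth": 0.4}`, a node rich in unprocessed data outranks a well-connected but data-poor one, steering the roaming agent toward strategically valuable data first.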
在自然灾害之后,人类援助和灾难恢复(HADR)行动必须处理中断的通信网络和有限的资源。这种恶劣的条件使得高通信开销的机器学习方法(无论是集中式还是分布式)不切实际,从而阻碍了采用人工智能解决方案来实现HADR操作的关键功能:建立准确和最新的态势感知。为了解决这个问题,我们开发了漫游机器学习(RoamML),这是一种为HADR操作设计的新型分布式持续学习框架,其前提是移动ML模型比大型数据集传输或频繁的模型参数更新更有效和健壮。RoamML部署了一个移动AI代理,它可以跨包含未处理数据的网络节点增量训练模型;在每一站,智能体启动一个局部训练阶段来更新其内部ML模型参数。为了优先处理有战略价值的数据,RoamML代理遵循基于数据重力概念的导航系统,利用多标准决策技术同时考虑代理路由优化的许多目标,包括模型学习效率和网络资源利用,同时无缝地将来自专家判断的主观见解与来自可量化数据的客观指标相结合,以确定每个下一跳。我们进行了大量的实验来评估RoamML,证明了该框架在高度动态、资源受限的环境下训练ML模型的效率。RoamML在理想的网络条件下实现了与集中式机器学习训练相似的性能,并在更现实的场景中使用更少的网络资源,最终在所有实验中节省高达75%的带宽利用率。
RoamML distributed continual learning: Adaptive and flexible data-driven response for disaster recovery operations
Simon Dahdal, Sara Cavicchi, Alessandro Gilli, Filippo Poltronieri, Mauro Tortonesi, Niranjan Suri, Cesare Stefanelli
DOI: 10.1016/j.jnca.2025.104322, Journal of Network and Computer Applications, vol. 243, Article 104322, Pub Date: 2025-11-01
Citations: 0
Performance modelling and optimal stage assignment for multistage P4 switches
IF 8, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-11-01, Epub Date: 2025-08-26, DOI: 10.1016/j.jnca.2025.104295
Geng-Li Zhou, Steven S.W. Lee, Ren-Hung Hwang, Yin-Dar Lin, Yuan-Cheng Lai
P4 programmable switches typically consist of multiple computation stages, each capable of independently executing flow rules to achieve the desired network function (NF). A network function chain (NFC) can be implemented to provide a network service by concatenating a set of NFs. This paper studies the stage-to-NF assignment problem in multistage P4 switches. We propose a greedy-based stage assignment algorithm that provably solves such resource allocation problems optimally. The algorithm's key feature is its ability to address load imbalances among the NFs by considering both their packet arrival rates and their service rates. During each iteration of the algorithm's execution, a set of stage assignments needs to be evaluated. To efficiently determine the average packet delay for each assignment, we developed a queuing model and derived an analytical solution. The analytical results are verified through simulation, and the gap between them is negligible. Additionally, the simulation results demonstrate the algorithm's superiority in handling load imbalances among NFs. The algorithm assigns stages efficiently enough that, for a set of NFCs with a constant total input rate, altering the distribution of arrival rates among the NFCs results in similar average delays. The experimental instances indicate that the variation in delay remains within 8% after altering the arrival rate distribution among the NFCs. Furthermore, we implemented a benchmark named "Equal Stage Assignment" in which each NF is assigned an equal number of stages. Compared to the Equal Stage Assignment algorithm, the proposed stage assignment algorithm can reduce the average delay by more than 20%, particularly in cases where the loads between NFs are imbalanced.
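The idea of allocating stages by arrival and service rates can be sketched as a greedy loop that always gives the next stage to the NF with the largest marginal delay reduction. This is a simplified illustration assuming an M/M/1 queue per NF with additive per-stage service capacity, not the paper's exact algorithm or queuing model:

```python
import math

def greedy_stage_assignment(arrival, service_per_stage, total_stages):
    """Greedily assign pipeline stages to NFs to reduce arrival-weighted
    average delay, modelling each NF as an M/M/1 queue whose service
    rate grows linearly with its stage count (an assumed simplification)."""
    n = len(arrival)
    # start with the minimum stages each NF needs to be stable (s*mu > lambda)
    stages = [math.floor(l / m) + 1 for l, m in zip(arrival, service_per_stage)]
    if sum(stages) > total_stages:
        raise ValueError("not enough stages for a stable assignment")

    def delay(i, s):
        # M/M/1 mean sojourn time with service rate s * mu_i
        return 1.0 / (s * service_per_stage[i] - arrival[i])

    for _ in range(total_stages - sum(stages)):
        # next stage goes to the NF with the largest arrival-weighted
        # marginal reduction in delay
        best = max(range(n), key=lambda i: arrival[i] *
                   (delay(i, stages[i]) - delay(i, stages[i] + 1)))
        stages[best] += 1
    return stages

# Two NFs, per-stage service rate 1.0, six stages to distribute:
# the heavily loaded NF receives proportionally more stages.
print(greedy_stage_assignment([0.5, 2.0], [1.0, 1.0], 6))  # -> [2, 4]
```

Weighting the marginal gain by the arrival rate is what lets heavier flows pull in extra stages, which mirrors the load-balancing behaviour the abstract reports.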
Journal of Network and Computer Applications, vol. 243, Article 104295.
Citations: 0
Bitcoin attacks: A comprehensive study
IF 8, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-11-01, Epub Date: 2025-08-27, DOI: 10.1016/j.jnca.2025.104297
Arieb Ashraf Sofi, Ajaz Hussain Mir, Zamrooda Jabeen
Bitcoin, a widely recognized cryptocurrency, embodies features such as anonymity and decentralization. In the Bitcoin network, transactions are propagated between peers using a distributed database called a blockchain. To maintain confidence in transactions, the network can tolerate virtually no successful attacks; nevertheless, attacks remain possible. This paper examines different attacks targeting the Bitcoin network, covering their vulnerabilities, repercussions, and countermeasures. It presents the various forms of attack that might target the Bitcoin network and explores their interconnections. The analysis delves into the intricacies of Bitcoin's decentralized architecture, emphasizing how critical network security is to maintaining its integrity.
Journal of Network and Computer Applications, vol. 243, Article 104297.
Citations: 0
DBASC: Decentralized blockchain-based architecture with integration of smart contracts for secure communication in VANETs
IF 8, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-11-01, Epub Date: 2025-08-23, DOI: 10.1016/j.jnca.2025.104294
Righa Tandon, Neeraj Sharma
The need for secure and effective communication in vehicular networks is fundamental to the successful development of connected and autonomous vehicles. In this research, we propose a decentralized blockchain-based architecture for vehicular authentication and message exchange, using smart contracts to increase trust and security. Authentication and data transfer are deliberately segregated onto separate blockchains, each designed around the distinct speed requirements of its function and operational dynamics. The authentication blockchain uses a lightweight consensus mechanism designed to quickly verify the identity of entities, helping to establish mutual trust. In contrast, the communication blockchain uses a heavier consensus mechanism to ensure the integrity and traceability of messages within the network. Each blockchain therefore runs a different consensus mechanism: the first favors speed over strong security guarantees in order to authenticate identity quickly, while the second securely maintains integrity and accountability for the contents of the messages it records. This segregation of functionality improves the performance and scalability of the entire network, adds a further layer of security, minimizes the attack surface, and reduces the per-blockchain complexity of consensus. The smart-contract-based dual-blockchain architecture thus provides a secure, efficient, and scalable means of managing vehicle communication in decentralized intelligent transportation systems.
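The separation of concerns behind the dual-blockchain design can be sketched as a dispatcher that routes identity events to one hash-linked log and message exchanges to another. This is a toy illustration of the routing idea only; consensus, networking, and smart contracts are omitted, and all names are hypothetical:

```python
from dataclasses import dataclass, field
from hashlib import sha256

@dataclass
class Chain:
    """A toy hash-linked log standing in for one of the two blockchains."""
    name: str
    blocks: list = field(default_factory=list)

    def append(self, payload: str) -> str:
        # link each block to its predecessor by hashing (prev_hash + payload)
        prev = self.blocks[-1][0] if self.blocks else "0" * 64
        block_hash = sha256((prev + payload).encode()).hexdigest()
        self.blocks.append((block_hash, payload))
        return block_hash

auth_chain = Chain("authentication")   # lightweight consensus in the paper
comm_chain = Chain("communication")    # heavier consensus in the paper

def record(event_type: str, payload: str) -> str:
    """Route identity events to the auth chain and message exchanges to the
    communication chain, mirroring DBASC's segregation of functionality."""
    chain = auth_chain if event_type == "auth" else comm_chain
    return chain.append(payload)

record("auth", "vehicle V1 identity verified")       # lands on auth_chain
record("message", "V1 -> V2: hazard ahead")          # lands on comm_chain
```

Keeping the two logs separate is what lets each run a consensus protocol matched to its latency and security needs, which is the architectural point the abstract makes.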
Journal of Network and Computer Applications, vol. 243, Article 104294.
Citations: 0