
Computer Communications: Latest Publications

META: Multi-classified encrypted traffic anomaly detection with fine-grained flow and interaction analysis
IF 4.3 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-30 | DOI: 10.1016/j.comcom.2025.108333
Boyu Kuang , Yuchi Chen , Yansong Gao , Yaqian Xu , Anmin Fu , Willy Susilo
The pervasive implementation of encryption mechanisms has introduced considerable obstacles to anomalous traffic detection, rendering conventional attack detection methodologies that rely on packet payload characteristics ineffectual. In the absence of plaintext information, current encrypted-traffic anomaly detection mainly relies on traffic data analysis to identify and characterize anomalous attack patterns in encrypted traffic, employing machine learning or deep learning models. However, the existing methods still suffer from limited detection capabilities, especially the ability to classify multi-class attacks, due to insufficient internal and external features. In this paper, we propose a Multi-classified Encrypted Traffic Anomaly Detection (META) method. META refines and extends the available feature dimensions in encrypted traffic by leveraging two key aspects: the internal interaction behavior information within the traffic and the external interaction behavior information in the network topology. Specifically, an in-depth examination of the internal packet interaction features is undertaken, resulting in a novel feature set, designated META-Features, encompassing 278 fine-grained statistical features. Furthermore, a Graph Neural Network (GNN) is employed to learn the external interaction behavior in the network topology from the embedding of the IP node graph and flow edge graph. The experimental results demonstrate that the refined feature set META-Features significantly enhances the model's detection capabilities. As a result, the META-GNN model outperforms traditional approaches, with an accuracy of 91.90% and an F1-score of 87.41%.
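The abstract mentions learning external interaction behavior from an IP node graph. A minimal sketch of the kind of aggregation such a GNN layer performs is shown below: each node's embedding is updated with the mean of its neighbours' embeddings. This is an illustrative stand-in, not the paper's actual model; the node names, embeddings, and the simple self/neighbour averaging are assumptions.

```python
# One round of mean-neighbour message passing over an IP-node graph
# (illustrative sketch; META's GNN architecture may differ).

def mean_aggregate(embeddings, edges):
    """embeddings: {node: [float, ...]}, edges: [(u, v), ...] undirected.
    Returns updated embeddings combining each node with its neighbourhood."""
    neigh = {n: [] for n in embeddings}
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    out = {}
    for n, h in embeddings.items():
        msgs = [embeddings[m] for m in neigh[n]] or [h]  # isolated node keeps itself
        agg = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        out[n] = [(a + b) / 2 for a, b in zip(h, agg)]   # average self and neighbourhood
    return out

emb = {"10.0.0.1": [1.0, 0.0], "10.0.0.2": [0.0, 1.0], "10.0.0.3": [1.0, 1.0]}
edges = [("10.0.0.1", "10.0.0.2"), ("10.0.0.2", "10.0.0.3")]
print(mean_aggregate(emb, edges)["10.0.0.2"])  # → [0.5, 0.75]
```

Stacking several such rounds lets a node's embedding reflect multi-hop interaction behaviour, which is what the flow-edge and IP-node graphs are meant to capture.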
Citations: 0
Evaluating scalability of median-based ADR under different mobility conditions
IF 4.3 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-24 | DOI: 10.1016/j.comcom.2025.108322
Geraldo A. Sarmento Neto , Thiago A.R. da Silva , Artur F. da S. Veloso , Pedro Felipe de Abreu , Luis H. de O. Mendes , J. Valdemir dos Reis Jr
The LoRaWAN protocol is widely used in Internet of Things (IoT) applications due to its ability to provide long-range, low-power communication. The Adaptive Data Rate (ADR) mechanism dynamically adjusts transmission parameters to optimize energy consumption. However, ADR is primarily designed for static devices, which limits its effectiveness in mobile environments, where fluctuating signal conditions can degrade performance. To address this limitation, the Median-Based ADR (MB-ADR) scheme was introduced, leveraging statistical measures to improve ADR adaptability to changing channel conditions. This study evaluates the scalability of MB-ADR in networks with up to 1,000 end devices and node speeds of up to 20 m/s, considering mobility models such as Random Walk and Gauss–Markov. The results show that MB-ADR demonstrates superior performance in scenarios with realistic mobility patterns, particularly under the Steady-State Random Waypoint model, resulting in improvements of up to 15% in Packet Delivery Ratio (PDR) and 55% in energy efficiency compared to a Kalman filter-based scheme under the same mobility model. Additionally, the analysis demonstrates the effectiveness of MB-ADR in improving throughput and reducing collisions by promoting an efficient distribution of spreading factors. Overall, the study confirms the potential of MB-ADR to enhance communication reliability and energy efficiency in mobile IoT networks, making it a viable solution for large-scale, high-density IoT deployments with variable mobility.
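The core idea of MB-ADR, as described here, is to replace the max-SNR statistic of the standard ADR decision with a median, which is robust to the SNR outliers that mobility produces. A hedged sketch of that decision rule follows; the SNR demodulation floors and the 10 dB margin are illustrative assumptions, not the paper's values.

```python
import statistics

# Hypothetical median-based ADR decision: pick the lowest (fastest) SF whose
# demodulation floor is met by the MEDIAN SNR of recent uplinks minus a margin.
SNR_FLOOR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}
MARGIN_DB = 10.0  # installation margin (assumed)

def mb_adr_select_sf(snr_history, current_sf=12):
    snr_ref = statistics.median(snr_history)   # MB-ADR: median, not max
    for sf in range(7, 13):                    # prefer lower SF = higher data rate
        if snr_ref - MARGIN_DB >= SNR_FLOOR[sf]:
            return sf
    return current_sf                          # link too weak: keep current SF

# One optimistic outlier (+8 dB) in a noisy history: the median ignores it,
# so the device is not pushed to an over-aggressive data rate.
print(mb_adr_select_sf([-4.0, -5.5, -6.0, 8.0, -5.0]))  # → 10
```

Under a max-based rule the +8 dB outlier alone would justify SF7, which a mobile node would likely fail to sustain; the median keeps the decision anchored to typical conditions.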
Citations: 0
Revisiting the problem of optimizing spreading factor allocations in LoRaWAN: From theory to practice
IF 4.3 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-23 | DOI: 10.1016/j.comcom.2025.108321
Dimitrios Zorbas , Aruzhan Sabyrbek , Luigi Di Puglia Pugliese
This paper revisits the problem of optimizing LoRa network success probability by proposing an optimized allocation strategy for Spreading Factors (SFs) under both uniform and Gaussian network deployments with a single or multiple gateways. More specifically, we solve the problem of finding the best SF allocations in dense network deployments whose end devices (EDs) are initially assigned the minimum SF. Theoretical models are developed to quantify the success probability of transmissions, considering the capture effect as well as intra- and inter-SF interference. A mathematical optimization framework is introduced to determine the optimal SF distribution that maximizes the average probability of packet reception. The problem is solved using Mixed Integer Linear Programming (MILP) and then evaluated using simulations. Even though optimal SF allocation strategies have been proposed in the literature, no practical insights have been discovered and no real-world deployments have been considered. To this end, the practical benefits of using improved or optimal SF settings are discovered in this paper. Simulation results confirm the theoretical findings while demonstrating an improvement of up to 10 percentage points in Packet Reception Ratio (PRR) in the real-world use case.
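To see why SF allocation is an optimization problem at all, note that airtime roughly doubles per SF step, so the per-SF offered load (and hence collision rate) depends on how devices are spread across SFs. The toy model below uses a pure-ALOHA success probability e^(-2G) per SF; the rates, airtimes, and the ALOHA model itself are illustrative assumptions, not the paper's capture-aware formulation or MILP.

```python
import math

# Traffic-weighted mean packet success under a simplistic per-SF ALOHA model.
def avg_success(alloc, pkt_rate=0.01, base_airtime=0.05):
    """alloc: {sf: n_devices}. pkt_rate in pkt/s, base_airtime = SF7 airtime in s."""
    total = sum(alloc.values())
    p = 0.0
    for sf, n in alloc.items():
        airtime = base_airtime * 2 ** (sf - 7)   # airtime doubles per SF step
        load = n * pkt_rate * airtime            # offered load G on this SF
        p += (n / total) * math.exp(-2 * load)   # pure ALOHA: P(success) = e^{-2G}
    return p

even = {sf: 100 for sf in range(7, 13)}                      # uniform allocation
skewed = {7: 500, 8: 50, 9: 20, 10: 15, 11: 10, 12: 5}       # favour short airtimes
print(round(avg_success(skewed), 3), round(avg_success(even), 3))
```

Even this crude model shows that packing devices onto low SFs (short airtimes) can beat a uniform split, which is the intuition behind searching for an optimal, non-uniform SF distribution.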
Citations: 0
Beyond performance: comparing the costs of applying Deep and Shallow Learning
IF 4.3 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-22 | DOI: 10.1016/j.comcom.2025.108312
Rafael Teixeira, Leonardo Almeida, Pedro Rodrigues, Mário Antunes, Diogo Gomes, Rui L. Aguiar
The rapid growth of mobile network traffic and the emergence of complex applications, such as self-driving cars and augmented reality, demand ultra-low latency, high throughput, and massive device connectivity, which traditional network design approaches struggle to meet. These issues were initially addressed in Fifth-Generation (5G) and Beyond-5G (B5G) networks, where Artificial Intelligence (AI), particularly Deep Learning (DL), is proposed to optimize the network and to meet these demanding requirements. However, the resource constraints and time limitations inherent in telecommunication networks raise questions about the practicality of deploying large Deep Neural Networks (DNNs) in these contexts. This paper analyzes the costs of implementing DNNs by comparing them with shallow ML models across multiple datasets and evaluating factors such as execution time and model interpretability. Our findings demonstrate that shallow ML models offer comparable performance to DNNs, with significantly reduced training and inference times, achieving up to 90% acceleration. Moreover, shallow models are more interpretable, as explainability metrics struggle to agree on feature importance values even for high-performing DNNs.
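The cost comparison the abstract describes can be illustrated with a toy benchmark: train a shallow model (a perceptron) and a small "deep" model (a one-hidden-layer MLP with manual backprop) on the same task and compare wall-clock training time. Everything below (the task, model sizes, hyper-parameters) is an illustrative assumption, not the paper's experimental setup.

```python
import math, random, time

random.seed(0)
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
y = [1 if a + b > 0 else 0 for a, b in X]  # linearly separable toy labels

def train_perceptron(epochs=20, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (a, c), t in zip(X, y):
            p = 1 if w0 * a + w1 * c + b > 0 else 0
            w0 += lr * (t - p) * a; w1 += lr * (t - p) * c; b += lr * (t - p)
    return lambda a, c: 1 if w0 * a + w1 * c + b > 0 else 0

def train_mlp(hidden=4, epochs=300, lr=0.5):
    rng = random.Random(1)
    sig = lambda z: 1 / (1 + math.exp(-z))
    W1 = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for (a, c), t in zip(X, y):
            h = [sig(W1[j][0] * a + W1[j][1] * c + b1[j]) for j in range(hidden)]
            o = sig(sum(W2[j] * h[j] for j in range(hidden)) + b2)
            d_o = (o - t) * o * (1 - o)          # squared-loss output delta
            for j in range(hidden):
                d_h = d_o * W2[j] * h[j] * (1 - h[j])
                W2[j] -= lr * d_o * h[j]
                W1[j][0] -= lr * d_h * a; W1[j][1] -= lr * d_h * c
                b1[j] -= lr * d_h
            b2 -= lr * d_o
    return lambda a, c: int(sig(sum(
        W2[j] * sig(W1[j][0] * a + W1[j][1] * c + b1[j])
        for j in range(hidden)) + b2) > 0.5)

t0 = time.perf_counter(); shallow = train_perceptron(); t_shallow = time.perf_counter() - t0
t0 = time.perf_counter(); deep = train_mlp(); t_deep = time.perf_counter() - t0
acc = sum(shallow(a, c) == t for (a, c), t in zip(X, y)) / len(X)
print(f"shallow acc={acc:.2f}, deep/shallow training-time ratio={t_deep / t_shallow:.0f}x")
```

On tasks this simple the shallow model matches the deep one while training orders of magnitude faster, which is the trade-off the paper quantifies on real network datasets.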
Citations: 0
RIS-assisted LoRa networks with diversity: Impact of hardware impairments and phase noise
IF 4.3 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-20 | DOI: 10.1016/j.comcom.2025.108319
Thi-Phuong-Anh Hoang , Thien Huynh-The , Tien Hoa Nguyen , Trong-Thua Huynh , Nguyen-Son Vo , Lam-Thanh Tu
This paper investigates the performance of downlink LoRa networks assisted by reconfigurable intelligent surfaces (RIS) and diversity techniques. We derive closed-form expressions for the coverage probability (Pcov) under four scenarios: phase noise at the RIS only, hardware impairments at both the gateway and end devices (EDs), the combined effect of both impairments, and an ideal benchmark case. The analysis is carried out within a unified framework that is valid for any number of RIS elements, providing key insights into the influence of hardware impairment levels, gateway transmit power, and the diversity order as the number of RIS elements grows large. The results reveal that coverage probability improves with transmit power but deteriorates under more severe hardware impairments, while the diversity order scales directly with the number of RIS elements. Monte Carlo simulations validate the analytical findings and confirm that the ideal scenario achieves the best performance, followed in order by the phase noise, hardware impairment, and combined impairment cases.
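The role of Monte Carlo validation here can be sketched with a deliberately simplified model: each of N RIS elements contributes a unit-amplitude path whose residual phase error is uniform on [-spread, +spread], and coverage is the probability that the resulting SNR clears a threshold. The channel model, noise figure, and thresholds below are illustrative assumptions, not the paper's closed-form analysis.

```python
import cmath, math, random

def pcov(n_elems, phase_spread, snr_th_db, noise_db=-20.0, trials=2000, seed=0):
    """Monte Carlo estimate of coverage probability for an RIS with
    n_elems unit-gain elements and uniform residual phase noise."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # coherent sum of element contributions with per-element phase error
        s = sum(cmath.exp(1j * rng.uniform(-phase_spread, phase_spread))
                for _ in range(n_elems))
        snr_db = 20 * math.log10(abs(s)) - noise_db
        hits += snr_db >= snr_th_db
    return hits / trials

ideal = pcov(32, 0.0, snr_th_db=50.0)            # perfect phase alignment
noisy = pcov(32, math.pi / 2, snr_th_db=50.0)    # severe phase noise
print(ideal, noisy)
```

With perfect alignment the coherent gain is exactly N, so coverage is certain at this threshold; phase noise shrinks the effective gain (roughly by the mean of cos of the error), reproducing the ordering the paper reports: ideal best, impaired cases worse.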
Citations: 0
Energy consumption optimization in UAV-assisted multi-layer mobile edge computing with active transmissive RIS
IF 4.3 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-19 | DOI: 10.1016/j.comcom.2025.108320
Kexin Yang, Yaxi Liu, Boxin He, Jiahao Huo, Wei Huangfu
Unmanned Aerial Vehicle (UAV)-assisted edge computing provides low-latency and low-energy-consumption computing capabilities for sparsely distributed Internet of Things (IoT) networks. In addition, the assisting UAVs provide line-of-sight links to further improve communication quality. However, the existing offloading strategies have low efficiency and high costs. Motivated by this, we propose a novel UAV-assisted multi-layer mobile edge computing network with an active transmissive reconfigurable intelligent surface (RIS). The introduced active transmissive RIS not only receives data from UAVs but also performs computing functionality. We formulate an optimization problem to minimize the total system energy consumption under delay constraints by jointly planning UAV positions and allocating computing bits, sub-carriers, time slots, transmission power, and the RIS transmission coefficient. To tackle this problem, we first use the block coordinate descent (BCD) algorithm to decouple it into four sub-problems. Then, we solve them by adopting successive convex approximation (SCA), difference-of-convex (DC) programming, and introducing slack variables. Experimental results demonstrate that the proposed network is superior to the other five baselines concerning energy consumption reduction. Also, the influences of system parameters are verified, including the number of IoT devices, the number of RIS elements, and the delay threshold.
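The BCD pattern used to decouple the energy minimization can be shown on a toy problem: alternately minimize a coupled quadratic over one variable block while holding the other fixed, each block having a closed-form minimizer. The function f(x, y) = x² + y² + xy - 4x - 5y below is purely illustrative, not the paper's objective.

```python
# Toy block coordinate descent: alternate exact minimization over x and y.
# argmin_x f: df/dx = 2x + y - 4 = 0  ->  x = (4 - y) / 2
# argmin_y f: df/dy = 2y + x - 5 = 0  ->  y = (5 - x) / 2

def bcd(iters=50):
    x = y = 0.0
    for _ in range(iters):
        x = (4 - y) / 2   # minimize over block x with y fixed
        y = (5 - x) / 2   # minimize over block y with updated x fixed
    return x, y

x, y = bcd()
print(round(x, 6), round(y, 6))  # → 1.0 2.0
```

Each sweep contracts the error by a constant factor, so the iterates converge to the joint minimizer (1, 2); the paper applies the same alternation to its four sub-problems, using SCA and DC programming where no closed form exists.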
Citations: 0
Channel-hopping sequence generation for blind rendezvous in cognitive radio-enabled internet of vehicles: A multi-agent twin delayed deep deterministic policy gradient-based method
IF 4.3 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-09-19 | DOI: 10.1016/j.comcom.2025.108318
Mehri Asadi Vasfi, Behrouz Shahgholi Ghahfarokhi
Efficient spectrum utilization is a major challenge in highly dynamic vehicular environments due to the scarcity of spectrum resources. Cognitive Radio (CR) has emerged as a solution to improve spectrum utilization by enabling opportunistic access in IoV. In this context, channel-hopping based blind rendezvous offers a practical approach for decentralized spectrum access in CR-enabled IoV (CR-IoV). This paper presents a novel Multi-Agent Twin Delayed Deep Deterministic Policy Gradient (MATD3PG)-based strategy for generating channel sequences in channel-hopping-based blind rendezvous. Unlike existing methods that overlook the quality of licensed spectrum, our approach ensures spectrum efficiency and QoS awareness in dynamic channel sequence generation. We formulate the channel sequence selection problem as a multi-objective optimization, aiming to maximize spectrum efficiency and minimize Time-To-Rendezvous (TTR) while meeting stringent latency and reliability requirements for vehicular communications. Each vehicle independently generates a channel-hopping sequence using a learning agent, which considers key channel quality metrics such as availability, reliability, and capacity. The generated sequences are employed in an asynchronous and asymmetric blind rendezvous process, enhancing adaptability to dynamic network conditions. Simulation results demonstrate that the proposed method significantly outperforms existing approaches, including Enhanced Jump-Stay (EJS), Single-radio Sunflower Set (SSS), Zero-type, One-type, and S-type (ZOS), Multi-Agent Q-Learning based Rendezvous (MAQLR), Exponential-weight algorithm for Exploration and Exploitation (Exp3), and Reinforcement Learning-based Channel-Hopping Rendezvous (RLCH) in terms of Expected TTR (ETTR), Maximum TTR (MTTR), delay, capacity, and reliability.
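The time-to-rendezvous metric that the paper optimizes is easy to make concrete: two vehicles hop channels by their own sequences (possibly starting at different times, i.e., asynchronously), and TTR is the number of slots until they first meet on a commonly available channel. The sequences and availability set below are illustrative, not outputs of the MATD3PG policy.

```python
# Blind-rendezvous TTR for two independently generated hopping sequences.
def ttr(seq_a, seq_b, offset, available, max_slots=100):
    """seq_b's owner starts `offset` slots late (asynchronous model).
    Returns the 1-based slot of first rendezvous, or None if none occurs."""
    for t in range(max_slots):
        ch_a = seq_a[t % len(seq_a)]
        ch_b = seq_b[(t - offset) % len(seq_b)]
        if t >= offset and ch_a == ch_b and ch_a in available:
            return t + 1
    return None

seq_a = [0, 1, 2, 0, 1, 2]
seq_b = [2, 2, 1, 1, 0, 0]
print(ttr(seq_a, seq_b, offset=1, available={0, 1, 2}))  # → 3
```

A learning agent that shapes each vehicle's sequence can both shorten this TTR and bias hops toward high-quality licensed channels, which is the joint objective described above.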
由于频谱资源的稀缺性,在高动态车辆环境下,高效利用频谱是一个主要挑战。认知无线电(CR)已成为一种通过实现车联网中的机会接入来提高频谱利用率的解决方案。在这种情况下,基于信道跳频的盲交会为CR-IoV (CR-IoV)中的分散频谱接入提供了一种实用的方法。提出了一种新的基于多智能体双延迟深度确定性策略梯度(MATD3PG)的信道序列生成策略。与忽略许可频谱质量的现有方法不同,我们的方法在动态信道序列生成中保证了频谱效率和QoS感知。我们将信道序列选择问题描述为一个多目标优化问题,旨在最大限度地提高频谱效率和最小化时间到交会(TTR),同时满足车辆通信严格的延迟和可靠性要求。每辆车都使用学习代理独立地生成一个信道跳变序列,学习代理考虑关键的信道质量指标,如可用性、可靠性和容量。生成的序列被用于异步和非对称的盲交会过程,增强了对动态网络条件的适应性。仿真结果表明,该方法在期望TTR (ETTR)、最大TTR (MTTR)、延迟、容量和可靠性方面显著优于现有方法,包括增强型跳-停留(EJS)、单无线电Sunflower Set (SSS)、零型、一型和s型(ZOS)、基于多智能体q -学习的集合(MAQLR)、指数加权探索和开发算法(Exp3)和基于强化学习的信道跳集(RLCH)。
{"title":"Channel-hopping sequence generation for blind rendezvous in cognitive radio-enabled internet of vehicles: A multi-agent twin delayed deep deterministic policy gradient-based method","authors":"Mehri Asadi Vasfi,&nbsp;Behrouz Shahgholi Ghahfarokhi","doi":"10.1016/j.comcom.2025.108318","DOIUrl":"10.1016/j.comcom.2025.108318","url":null,"abstract":"<div><div>Efficient spectrum utilization is a major challenge in highly dynamic vehicular environments due to the scarcity of spectrum resources. Cognitive Radio (CR) has emerged as a solution to improve spectrum utilization by enabling opportunistic access in IoV. In this context, channel-hopping based blind rendezvous offers a practical approach for decentralized spectrum access in CR-enabled IoV (CR-IoV). This paper presents a novel Multi-Agent Twin Delayed Deep Deterministic Policy Gradient (MATD3PG)-based strategy for generating channel sequences in channel-hopping-based blind rendezvous. Unlike existing methods that overlook the quality of licensed spectrum, our approach ensures spectrum efficiency and QoS awareness in dynamic channel sequence generation. We formulate the channel sequence selection problem as a multi-objective optimization, aiming to maximize spectrum efficiency and minimize Time-To-Rendezvous (TTR) while meeting stringent latency and reliability requirements for vehicular communications. Each vehicle independently generates a channel-hopping sequence using a learning agent, which considers key channel quality metrics such as availability, reliability, and capacity. The generated sequences are employed in an asynchronous and asymmetric blind rendezvous process, enhancing adaptability to dynamic network conditions. 
Simulation results demonstrate that the proposed method significantly outperforms existing approaches, including Enhanced Jump-Stay (EJS), Single-radio Sunflower Set (SSS), Zero-type, One-type, and S-type (ZOS), Multi-Agent Q-Learning based Rendezvous (MAQLR), Exponential-weight algorithm for Exploration and Exploitation (Exp3), and Reinforcement Learning-based Channel-Hopping Rendezvous (RLCH) in terms of Expected TTR (ETTR), Maximum TTR (MTTR), delay, capacity, and reliability.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"243 ","pages":"Article 108318"},"PeriodicalIF":4.3,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
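The abstract above describes vehicles that rank channels by quality metrics (availability, reliability, capacity) when building a hopping sequence. The following is a minimal illustrative sketch of that idea, not the paper's MATD3PG agent: the metric triples, weights, and sweep structure are all assumptions for illustration. It orders a full sweep of channels best-first, so two vehicles hopping independently still visit every channel each round while trying the highest-quality channels earliest.

```python
# Toy channel-hopping sequence generator for blind rendezvous.
# Illustrative only -- NOT the paper's MATD3PG method. Channels are
# ranked by a weighted quality score over (availability, reliability,
# capacity), all assumed normalized metrics, and the sequence repeats
# a best-first full sweep so every channel is still visited.

def quality(ch, w=(0.4, 0.3, 0.3)):
    """Weighted quality score of a (availability, reliability, capacity) triple."""
    avail, rel, cap = ch
    return w[0] * avail + w[1] * rel + w[2] * cap

def hopping_sequence(channels, rounds=2):
    """channels: {id: (availability, reliability, capacity)}.
    Returns a hop sequence visiting every channel each round,
    ordered best-first, so good channels are tried earliest."""
    ranked = sorted(channels, key=lambda c: quality(channels[c]), reverse=True)
    return ranked * rounds

channels = {
    1: (0.9, 0.8, 0.7),
    2: (0.4, 0.9, 0.5),
    3: (0.7, 0.6, 0.9),
}
seq = hopping_sequence(channels)
print(seq)  # best-first sweep, repeated
```

Because every round is a full sweep, two nodes running this sequence are guaranteed to overlap on some channel within one round even when they start asynchronously, which is the basic rendezvous guarantee the learned sequences in the paper also preserve.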
ScaleIP: A hybrid autoscaling of VoIP services based on deep reinforcement learning
IF 4.3 CAS Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-09-19 DOI: 10.1016/j.comcom.2025.108314
Zahra Najafabadi Samani , Juan Aznar Poveda , Dominik Gratz , Rene Hueber , Philipp Kalb , Thomas Fahringer
Adaptive resource provisioning has become crucial for cloud-based applications, especially those managing real-time traffic like Voice over IP (VoIP), which experience rapidly fluctuating workloads. Traditional static provisioning methods often fall short in these dynamic environments, leading to inefficiencies and potential service disruptions. Existing solutions struggle to maintain performance under varying traffic conditions, particularly for time-sensitive applications. This paper introduces ScaleIP, a hybrid autoscaling solution for containerized VoIP services that offers real-time adaptability and efficient resource management. ScaleIP leverages Deep Reinforcement Learning to make dynamic and efficient scaling decisions, improving call latency, increasing the number of successfully routed calls, and maximizing resource utilization. We evaluated ScaleIP through extensive experiments conducted on a real testbed utilizing the customer Call Detail Record (CDR) from 2023 provided by World Direct, encompassing over 89 million calls. The results show that ScaleIP consistently maintains call latency below 2 s, increases the number of successfully routed calls by 3.26 ×, and increases the resource utilization up to 60 % compared to state-of-the-art autoscaling methods.
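ScaleIP's core loop maps an observed system state to a scaling decision. The sketch below is a greedy, threshold-based stand-in for such a policy, not ScaleIP's trained DRL agent: the state features (call latency, CPU utilization, queued calls) and the thresholds are assumed values chosen only to show the shape of the state-to-action mapping a trained policy produces.

```python
# Illustrative scaling-decision stand-in, NOT ScaleIP's learned policy.
# Maps an observed state to a discrete action:
#   -1 = scale in, 0 = hold, +1 = scale out.
# Feature names and thresholds are assumptions for illustration.

def scale_action(latency_s, utilization, queued_calls,
                 latency_slo=2.0, util_low=0.3, util_high=0.8):
    """Greedy stand-in for a learned policy's argmax action."""
    if latency_s > latency_slo or utilization > util_high or queued_calls > 0:
        return +1      # SLO at risk: add a replica
    if utilization < util_low and queued_calls == 0:
        return -1      # overprovisioned: remove a replica
    return 0           # within target band: hold

print(scale_action(2.5, 0.6, 4))   # latency SLO breached -> scale out
print(scale_action(0.4, 0.2, 0))   # idle capacity -> scale in
print(scale_action(0.9, 0.5, 0))   # steady state -> hold
```

A DRL agent replaces these hand-set thresholds with a value function learned from reward signals (e.g. penalizing SLO violations and idle replicas), which is what lets it anticipate fluctuating VoIP workloads rather than react to them.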
Citations: 0
A preemptive task unloading scheme based on second optional unloading in cloud-fog collaborative networks
IF 4.3 CAS Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-09-16 DOI: 10.1016/j.comcom.2025.108315
Yuan Zhao , Hongmin Gao , Shuaihua Liu
Long-distance data transmission between Internet of Things (IoT) devices and a remote cloud center often leads to unacceptable latency for certain tasks. Fog computing has emerged as a promising solution for low-latency tasks, and the concept of cloud-fog collaborative networks has consequently garnered significant attention. However, existing research primarily focuses on heterogeneous tasks, overlooking the crucial combination of task priority and second unloading. To address this gap, this paper proposes a novel task unloading scheme that takes both preemptive priority and second optional unloading into account. In this scheme, delay-sensitive tasks (DSTs) are given preemptive priority over delay-tolerant tasks (DTTs). Furthermore, some DTTs may undergo preprocessing in the fog layer to optimize resource utilization. Moreover, tasks encountering blocking or preemption in the fog layer can be secondarily unloaded to the cloud layer. Within this framework, we devise a four-dimensional Markov chain (4DMC) to model and analyze the process, and we assess performance indicators under various parameters through numerical experiments. Ultimately, the proposed strategy is compared, via both numerical analysis and simulation validation, with an unloading scheme that does not incorporate second unloading. The results indicate that our scheme notably enhances the throughput of DTTs, albeit at a marginal performance trade-off.
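The 4DMC analysis above boils down to computing a stationary distribution of a Markov chain and reading performance indicators off it. The toy below shows that style of analysis on an invented 3-state chain (fog idle, fog serving a DTT, fog serving a DST, with DST arrivals preempting a DTT in service); the state space and transition probabilities are assumptions for illustration, far smaller than the paper's four-dimensional chain.

```python
# Toy stationary-distribution computation, illustrating the kind of
# analysis behind the paper's 4DMC. The 3-state chain and its
# transition probabilities are invented for illustration; a preempted
# or blocked DTT is assumed to leave for the cloud layer.

def stationary(P, iters=2000):
    """Power iteration pi <- pi @ P for a row-stochastic matrix P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# States: 0 = fog idle, 1 = serving a DTT, 2 = serving a DST.
P = [
    [0.5, 0.3, 0.2],   # idle: stay / DTT arrives / DST arrives
    [0.3, 0.5, 0.2],   # DTT: finishes / continues / preempted by a DST
    [0.4, 0.0, 0.6],   # DST: finishes / (never yields to a DTT) / continues
]
pi = stationary(P)
print([round(p, 3) for p in pi])  # long-run fraction of time in each state
assert abs(sum(pi) - 1.0) < 1e-9
```

From such a distribution one can read off indicators like DTT throughput (time the fog actually serves DTTs) and the preemption pressure DSTs exert, which mirrors the trade-off reported in the abstract.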
Citations: 0
A scalable blockchain framework for IoT based on restaking and incentive mechanisms
IF 4.3 CAS Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-09-15 DOI: 10.1016/j.comcom.2025.108317
Fang Ye , Zitao Zhou , Yifan Wang , Yibing Li
This paper proposes a scalable blockchain framework for the Internet of Things (IoT) based on a sidechain solution. Considering the low-trust models of existing sidechains, we present a restaking-based trust aggregation method for Proof of Stake (PoS). By allowing mainchain validators to duplicate their stake on the sidechain network, we enhance the cryptoeconomic security of the sidechain while reducing costs. Given the potential conflicts between the risks and rewards of trust aggregation, and the challenges that the heterogeneity of IoT devices poses for quantitative analysis, we propose a contract-based incentive analysis framework. By analyzing the optimal strategies of validators with different risk preferences, we design differentiated contracts to promote incentive-compatible outcomes. Additionally, we account for the uncertainty in the distribution of sidechain validators and discuss optimal configurations under various conditions. To address potential collusion attacks, we introduce a quantifiable exemption mechanism to limit the security risks. Finally, numerical simulations verify the feasibility and effectiveness of the proposed method.
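The restaking idea above has a simple arithmetic core: duplicated mainchain stake adds up to the sidechain's effective security budget, and each validator weighs restaking rewards against slashing risk according to its risk preference. The sketch below illustrates only that arithmetic, not the paper's contract design; the reward rate, slashing probability, and risk-aversion weight are assumed values.

```python
# Illustrative restaking arithmetic, NOT the paper's contract design.
# Parameters (reward rate, slashing probability/fraction, risk
# aversion) are assumed values chosen purely for illustration.

def effective_stake(validators):
    """Total stake securing the sidechain from restaked validators."""
    return sum(v["stake"] for v in validators if v["restaked"])

def expected_utility(stake, reward_rate, slash_prob, slash_frac, risk_aversion):
    """Risk-adjusted payoff of restaking for one validator: expected
    reward minus expected slashing loss, with the loss term scaled by
    how strongly the validator weighs downside risk."""
    reward = stake * reward_rate
    loss = stake * slash_prob * slash_frac
    return reward - risk_aversion * loss

validators = [
    {"stake": 100.0, "restaked": True},
    {"stake": 250.0, "restaked": True},
    {"stake": 80.0,  "restaked": False},
]
print(effective_stake(validators))  # 350.0
u = expected_utility(100.0, reward_rate=0.05, slash_prob=0.02,
                     slash_frac=0.5, risk_aversion=2.0)
print(round(u, 2))                  # 3.0
```

A validator restakes only when this utility is positive, which is why the paper designs differentiated contracts per risk preference: the same reward schedule that attracts a risk-neutral validator can repel a strongly risk-averse one.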
Citations: 0