
Latest publications in Computer Communications

Resource allocation for efficient AI inference in wireless sensing edge networks
IF 4.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-13 | DOI: 10.1016/j.comcom.2025.108363
Tanveer Ahmad , Asma Abbas Hassan Elnour , Muhammad Usman Hadi , Kiran Khurshid , Xue Jun Li , Weiwei Jiang
Integrating AI inference into wireless sensing edge networks presents notable challenges due to limited resources, changing environments, and diverse devices. In this study, we propose a novel resource allocation framework that enhances energy efficiency, reduces latency, and ensures fairness across distributed edge nodes for AI inference. The framework models a multi-objective optimization problem that reflects the interdependence of computation, communication, and energy at each device. We also develop a decentralized algorithm, based on dual decomposition and projected gradient ascent, that relies only on local data. Extensive simulations demonstrate that our proposed method reduces average inference latency by 31.4% and energy consumption by 27.8% compared to greedy and round-robin techniques. System utility improves by up to 59.2%, and fairness, measured using Jain's index, remains within 8% of the ideal. Additionally, throughput analysis confirms that our approach achieves up to 49 tasks/s, outperforming existing strategies by more than 40%. These findings show that the resource-aware AI inference approach is scalable, energy-efficient, and suitable for real-time use in multi-user wireless edge networks.
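The fairness metric cited in the abstract, Jain's index, has a simple closed form: J(x) = (Σxᵢ)² / (n·Σxᵢ²), ranging from 1/n (one user gets everything) to 1 (perfectly equal shares). A minimal sketch of the metric, not of the paper's allocation algorithm:

```python
def jains_index(alloc):
    """Jain's fairness index over per-user allocations:
    1.0 means perfectly fair, 1/n means one user takes everything."""
    n = len(alloc)
    total = sum(alloc)
    sq = sum(x * x for x in alloc)
    return (total * total) / (n * sq)

print(jains_index([2.0, 2.0, 2.0, 2.0]))  # equal shares -> 1.0
print(round(jains_index([4.0, 1.0, 1.0, 1.0]), 3))
```

"Within 8% of the ideal" in the abstract then corresponds to J ≥ 0.92 across the simulated edge nodes.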
Citations: 0
Analytical-based resource allocation framework for NOMA-assisted Semi-ISAC systems
IF 4.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-11 | DOI: 10.1016/j.comcom.2025.108354
Dinh Van Tung , Thai-Hoc Vu , Nguyen Tien Hoa
Emerging 6G applications, such as autonomous systems and immersive extended reality, require joint communication and sensing to meet stringent performance demands. Integrated sensing and communication (ISAC) has thus emerged as a promising paradigm for supporting such dual functionality in future wireless networks. This paper proposes a novel optimization framework for joint spectrum and power allocation in semi-ISAC systems assisted by non-orthogonal multiple access (NOMA). The objective is to maximize the minimum ergodic achievable rate under statistical channel state information (CSI), thereby ensuring fairness across heterogeneous communication and sensing services. The non-convex problem is reformulated using successive convex approximation (SCA) for efficient and tractable optimization. Closed-form expressions for ergodic rates are derived under two NOMA configurations: single layer per sub-band and multiple layers per sub-band, highlighting the trade-off between decoding complexity and spectral efficiency. Numerical results highlight four key performance benefits: (i) a guaranteed minimum rate of 2 Gbps per user at 20 dBm transmit power, (ii) improved fairness based on Jain’s index, (iii) higher ergodic sum rate compared to benchmark schemes, and (iv) robustness to channel fading and target variations such as Nakagami-m parameters, sensing distance, and radar cross-section. These findings confirm the adaptability and efficiency of the proposed framework for dense deployment scenarios in semi-ISAC networks.
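The ergodic rates studied here can be sanity-checked numerically: for a Nakagami-m channel, the power gain |h|² is Gamma-distributed with shape m and mean Ω, so E[log₂(1 + SNR·|h|²)] can be estimated by Monte Carlo. This is an illustrative check, not the paper's closed-form derivation; the parameter defaults are assumptions:

```python
import math
import random

def ergodic_rate(snr_lin, m=2.0, omega=1.0, n=100_000, seed=1):
    """Monte-Carlo estimate of the ergodic rate E[log2(1 + SNR * |h|^2)]
    under Nakagami-m fading; |h|^2 ~ Gamma(shape=m, mean=omega)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        g = rng.gammavariate(m, omega / m)  # channel power gain sample
        acc += math.log2(1.0 + snr_lin * g)
    return acc / n

# rate grows with SNR, and Jensen's inequality caps it at log2(1 + SNR*omega)
print(round(ergodic_rate(10.0), 3))
```

By Jensen's inequality the estimate must sit below log₂(1 + SNR·Ω), which is a quick correctness check on any such simulation.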
Citations: 0
Artificial immune system-based congestion control routing for Satellite networks
IF 4.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-07 | DOI: 10.1016/j.comcom.2025.108353
Zhihan Yu, Li Zhang, Haoru Su, Wanting Zhu
The characteristics of Low Earth Orbit (LEO) satellite networks, including high-speed node mobility, dynamic topology changes, and limited resources, significantly complicate rapid network congestion resolution. To address this challenge, an Artificial Immune System-based Congestion Control Routing (AIS-CCR) algorithm is proposed. AIS-CCR emulates the operational mechanisms of biological immune systems by employing immune memory and learning mechanisms to store and reuse historical effective control strategies, thereby enhancing congestion response speed. The algorithm adopts virtual grid mapping combined with geographic routing to simplify the routing calculation process, achieving self-learning, self-adaptive, and distributed congestion control capabilities in satellite networks. Simulation experiments demonstrate that AIS-CCR outperforms comparable algorithms across key performance metrics, including response time, queue load rate, packet loss rate, and end-to-end delay. The algorithm exhibits particularly pronounced advantages when handling complex multi-link congestion scenarios.
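The immune-memory mechanism — store an effective control strategy under a signature of the congestion state and reuse it when a similar pattern recurs — can be sketched as a quantized lookup. The quantization step and strategy labels below are hypothetical illustrations, not AIS-CCR's actual encoding:

```python
class ImmuneMemory:
    """Toy immune-memory store: quantize per-link queue loads into a key so
    that similar congestion patterns map to the same stored strategy."""

    def __init__(self, step=0.1):
        self.step = step
        self.store = {}

    def _key(self, queue_loads):
        # coarse quantization: loads within ~step/2 of each other collide
        return tuple(round(q / self.step) for q in queue_loads)

    def memorize(self, queue_loads, strategy):
        self.store[self._key(queue_loads)] = strategy

    def recall(self, queue_loads):
        return self.store.get(self._key(queue_loads))

mem = ImmuneMemory()
mem.memorize([0.81, 0.42], "reroute-via-grid-7")   # hypothetical strategy label
print(mem.recall([0.83, 0.44]))  # similar pattern -> reuses stored strategy
```

Recalling a stored strategy skips recomputation, which is the source of the faster congestion response the abstract claims.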
Citations: 0
Cache-assisted task offloading in Vehicular Edge Computing: A spatio-temporal deep reinforcement learning approach
IF 4.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-05 | DOI: 10.1016/j.comcom.2025.108351
Xiguang Li , Junlong Li , Yunhe Sun , Ammar Muthanna , Ammar Hawbani , Liang Zhao
Vehicular Edge Computing (VEC) faces significant challenges in jointly managing caching and task offloading due to dynamic network conditions and resource constraints. This paper proposes a novel framework that addresses these challenges through a synergistic three-stage process. The innovation lies in the tight integration of our modules: first, a Spatio-Temporal Fast Graph Convolutional Network (ST-FGCN) accurately forecasts task demands by capturing complex spatio-temporal correlations. Second, these predictions guide a Prediction-Informed Edge Collaborative Caching (PIECC) algorithm to proactively optimize resource placement across edge servers. Finally, a Genetic Asynchronous Advantage Actor–Critic (GA3C) strategy performs robust task offloading within this optimized environment. Unlike traditional reinforcement learning methods that often struggle with the large state–action spaces in VEC and converge to local optima, our framework simplifies the decision process via predictive caching and enhances exploration with the GA-infused GA3C algorithm. Simulation results demonstrate that our proposed framework significantly reduces long-term system cost, outperforms baseline methods in both latency and energy efficiency, and offers a more adaptive solution for dynamic VEC systems.
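The prediction-informed caching step can be illustrated with a toy greedy placement: each edge server caches the items with the highest forecast demand from the prediction stage. The `predicted_demand` structure and server names are invented for illustration and this is not the PIECC algorithm itself:

```python
def place_cache(predicted_demand, capacity):
    """Greedy prediction-informed placement: every edge server caches the
    `capacity` content items with the highest forecast demand.

    predicted_demand: {server: {item: predicted_requests}}"""
    placement = {}
    for server, demand in predicted_demand.items():
        top = sorted(demand, key=demand.get, reverse=True)[:capacity]
        placement[server] = set(top)
    return placement

forecast = {"edge-1": {"map-tile-a": 9, "model-b": 5, "clip-c": 1}}
print(place_cache(forecast, capacity=2))
```

Caching by forecast rather than by past hits is what lets the offloading agent assume popular content is already in place, shrinking its decision space.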
Citations: 0
Macroscopic diffusion prediction in social networks based on spatio-temporal and trend features
IF 4.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-11-04 | DOI: 10.1016/j.comcom.2025.108352
Xueqin Zhang , Yisong Lu , Gang Liu , Xiaowei Chen
Predicting the scale of information diffusion in social networks makes it possible to anticipate future propagation in advance, which plays a crucial role in controlling the spread of harmful information. We propose STTFP (Spatio-Temporal and Trend Features for Prediction), a deep learning framework that integrates temporal, spatial, and trend features to improve macroscopic diffusion prediction accuracy. The framework first utilizes graph attention networks to extract node interaction features from cascade graphs, captures node position features from diffusion sequences, and uses sparse matrix factorization to extract node features from social network graphs. It then adopts bi-directional gated recurrent units and self-attention mechanisms to deeply mine spatio-temporal features. Additionally, we design an attention-based convolutional neural network to capture short-term fluctuations in the information propagation process, while long short-term memory networks are used to uncover historical forwarding variation in diffusion scales. By fusing these features, the framework achieves incremental predictions of information diffusion. Experiments on three public datasets show that our method effectively enhances the accuracy of macroscopic diffusion predictions.
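The self-attention step mentioned above rests on scaled dot-product attention: each position in the sequence is weighted by a softmax over query–key similarity scores. A pure-Python sketch of the weight computation, as a generic illustration rather than STTFP's exact layer:

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product attention weights: softmax over (query . key) / sqrt(d).
    Returns one weight per key; weights are positive and sum to 1."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    mx = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# the key aligned with the query receives the larger weight
print(attention_weights([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))
```

The output of this step would then be the weight-averaged value vectors, letting the model emphasize the diffusion-sequence positions most relevant to each node.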
Citations: 0
Dynamic distance-based load balancing in mobile edge computing with deep reinforcement learning
IF 4.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-27 | DOI: 10.1016/j.comcom.2025.108337
Mohammad Esmaeil Esmaeili , Ahmad Khonsari , Mahdi Dolati
Edge computing reduces latency by bringing computation closer to end devices, but the growing scale and heterogeneity of edge networks make resource management increasingly complex. Load balancing is essential for efficient resource use and low response times, yet static approaches struggle in dynamic environments. This calls for adaptable, data-driven load balancing methods that can continuously respond to changing conditions and optimize performance. This paper addresses the problem of load balancing in edge computing, where the distance between servers plays a critical role in performance. We propose two deep reinforcement learning (DRL)-based algorithms – Deep Q-Learning (DQL) and Long Short-Term Memory (LSTM) – that dynamically adjust the neighbor radius for load distribution in response to environmental changes. Unlike static approaches, our methods learn the radius online in a data-driven manner without requiring global coordination. Simulation results demonstrate that both algorithms adapt effectively to dynamic conditions. In scenarios with 80–100 edge servers and 500–1000 requests per second, DQL achieves up to 18% higher throughput, 21% lower average response time, and 23% lower blocking rate compared to recent methods, while LSTM remains competitive under stable workloads.
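The idea of learning a neighbor radius from reward feedback can be reduced to a toy tabular sketch — a stateless, bandit-style simplification of the paper's DQL agent, with a hypothetical reward function standing in for the measured load-balancing performance:

```python
import random

def train_radius_policy(reward_fn, radii, episodes=2000, alpha=0.2, eps=0.1, seed=0):
    """Epsilon-greedy tabular value learning over a discrete set of candidate
    neighbor radii; returns the radius with the highest learned value."""
    rng = random.Random(seed)
    q = {r: 0.0 for r in radii}
    for _ in range(episodes):
        # explore a random radius with prob eps, otherwise exploit the best
        r = rng.choice(radii) if rng.random() < eps else max(q, key=q.get)
        q[r] += alpha * (reward_fn(r) - q[r])  # incremental value update
    return max(q, key=q.get)

# toy reward peaking at radius 3: the agent should settle on 3
best = train_radius_policy(lambda r: -(r - 3) ** 2, radii=[1, 2, 3, 4, 5])
print(best)  # -> 3
```

The paper's DQL and LSTM variants replace this table with function approximators conditioned on observed network state, but the adjust-radius-by-reward loop is the same shape.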
Citations: 0
Multi-antenna mobile charger scheduling optimization scheme for wireless rechargeable sensor networks
IF 4.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-21 | DOI: 10.1016/j.comcom.2025.108343
Jinyi Li , Yong Feng , Nianbo Liu , Ming Liu , Yingna Li
Multi-antenna mobile chargers (MC) featuring directional multi-beam functionality present a promising solution for energy replenishment in wireless rechargeable sensor networks. However, existing multi-antenna scheduling schemes encounter challenges in jointly optimizing the coupled problem of Antenna Configuration and Path Planning (ACPP) while balancing MC’s coverage efficiency with energy consumption. To address this gap, this paper investigates the complex interdependencies and stringent constraints inherent in ACPP, and proposes a phased hybrid optimization scheme, PHMS-ACPP, integrating multi-objective optimization and deep reinforcement learning to compute approximate solutions. We first employ a modified Gaussian mixture model incorporating physical coverage constraints via the Expectation–Maximization algorithm to partition clusters, thereby reducing problem complexity. Within each cluster, the subproblem of determining optimal antenna count and orientation is solved using the Multi-objective Grey Wolf Optimizer to simultaneously optimize MC’s coverage efficiency and energy consumption. Then, we utilize Double Deep Q-Network to plan MC’s charging path across clusters, which captures long-term temporal dependencies between the evolution of nodes’ energy states and the spatial allocation of charging resources, enhancing both global scheduling efficacy and long-term charging efficiency. Extensive simulations demonstrate that PHMS-ACPP significantly outperforms state-of-the-art baselines in reducing node failure rate and minimizing average charging delay, with reductions of approximately 21.6% and 14.4%, respectively.
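The cluster-partition stage can be illustrated with a much simpler stand-in: a greedy partition that respects a physical coverage radius, so no cluster contains a node the charger's beams could not reach from the cluster seed. This is not the paper's constrained Gaussian-mixture/EM procedure, only a sketch of the coverage constraint it enforces:

```python
import math

def partition_nodes(nodes, radius):
    """Greedy coverage-constrained partition: each (x, y) node joins the first
    cluster whose seed lies within `radius`; otherwise it seeds a new cluster."""
    seeds, clusters = [], []
    for x, y in nodes:
        for i, (sx, sy) in enumerate(seeds):
            if math.hypot(x - sx, y - sy) <= radius:
                clusters[i].append((x, y))
                break
        else:
            seeds.append((x, y))          # no reachable seed: start a cluster
            clusters.append([(x, y)])
    return clusters

cls = partition_nodes([(0, 0), (1, 0), (10, 10), (10.5, 10)], radius=2.0)
print(len(cls))  # -> 2 clusters, matching the two spatial groups
```

Within each such cluster, the paper then optimizes antenna count/orientation (MOGWO) and plans the inter-cluster path (Double DQN).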
Citations: 0
Multi-agent deep reinforcement learning for service function chain deployment in software defined LEO satellite networks
IF 4.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-21 | DOI: 10.1016/j.comcom.2025.108342
Pingduo Xu , Debin Wei , Jinglong Wen , Li Yang
Large-scale low earth orbit (LEO) satellite networks constitute a core component of future sixth-generation (6G) communication systems. To address the challenges of resource scarcity and highly dynamic topologies, integrating software-defined networking (SDN) and network function virtualization (NFV) technologies into LEO satellite networks has become imperative. We propose a hybrid centralized–distributed software-defined LEO satellite network architecture. Within this framework, this study focuses on the service function chain (SFC) deployment problem in LEO space–ground integrated networks. Time-expanded graphs (TEGs) are employed to model satellite networks with dynamic topological variations, aiming to satisfy diverse user requirements while jointly optimizing resource consumption costs and service latency. The problem is formulated as a weighted-sum minimization of resource consumption costs and service latency, and is proven to be NP-complete. Subsequently, we integrate the twin delayed deep deterministic policy gradient method with multi-agent techniques to design a multi-agent deep reinforcement learning SFC deployment (MADRL-D) framework for optimizing our objectives. Experimental results demonstrate that the proposed MADRL-D framework outperforms existing alternatives in terms of resource utilization efficiency, resource consumption costs, and service latency.
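The weighted-sum objective can be made concrete on a toy instance: for a short SFC, enumerate every node assignment and score it by w_cost·cost + w_lat·latency. The cost and latency tables here are invented for illustration, and this brute-force enumeration replaces, rather than reproduces, the MADRL-D solver (the problem being NP-complete is exactly why enumeration does not scale):

```python
from itertools import product

def deploy_sfc(vnf_costs, latency, w_cost=0.5, w_lat=0.5):
    """Brute-force the weighted-sum SFC objective.

    vnf_costs[v][n]: cost of placing VNF v on node n
    latency[a][b]:   delay of the link between nodes a and b
    Returns (best assignment of VNFs to nodes, best objective value)."""
    n_nodes = len(latency)
    best, best_val = None, float("inf")
    for assign in product(range(n_nodes), repeat=len(vnf_costs)):
        cost = sum(vnf_costs[v][n] for v, n in enumerate(assign))
        lat = sum(latency[assign[i]][assign[i + 1]]
                  for i in range(len(assign) - 1))
        val = w_cost * cost + w_lat * lat
        if val < best_val:
            best, best_val = assign, val
    return best, best_val

# 2-VNF chain over 2 nodes: splitting the chain trades link latency for cost
print(deploy_sfc([[1, 5], [5, 1]], [[0, 2], [2, 0]]))  # -> ((0, 1), 2.0)
```

In the paper this scalarized objective is what each learning agent optimizes over the time-expanded graph instead of by exhaustive search.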
Operator coexistence in IRS-assisted mmWave networks: A wideband approach
IF 4.3 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-20 | DOI: 10.1016/j.comcom.2025.108341
Joana Angjo, Anatolij Zubow, Falko Dressler
In sixth generation (6G) mobile networks, the push towards high-frequency bands for ultra-fast data rates intensifies the challenges of signal attenuation and reduced coverage range. Intelligent reconfigurable surfaces (IRSs) offer a promising solution by enhancing signal coverage and directing reflections, which also helps minimize loss. However, several challenges must be addressed before this technology can be fully incorporated into existing networks. A key issue is that, lacking bandpass filtering, an IRS cannot filter out non-target signals from other frequency bands. In areas where multiple wireless operators are deployed in close spatial proximity, even if they use different frequency bands, this may cause unwanted reflections that degrade their communication performance. To address this challenge, we previously proposed a solution that partitions an IRS into sub-surfaces (sub_IRS) and dynamically assigns operators to these sub_IRS. Results have shown that a proper assignment of wireless operators to sub_IRS can improve overall performance compared to a random assignment. In this paper, we introduce a wideband approach, demonstrating that the impact of unwanted reflections can be mitigated by using wideband channels, as the average signal-to-noise ratio (SNR) across subcarriers is less adversely affected. This approach leverages frequency diversity to reduce SNR variance: some subcarriers may be negatively affected while others benefit, so the system maintains more consistent and robust performance in the presence of IRS-induced unwanted reflections. Simulations and real-world measurements confirm that deploying wideband IRS provides a robust strategy for combating inter-operator reflections in next-generation IRS-assisted networks.
Additionally, the wideband approach requires no additional centralized resource control in future multi-operator networks. According to simulations, the SNR variance of a 1.28 GHz channel is approximately 20 dB lower than that of a 10 MHz channel when coexistence is considered. Similarly, measurements confirm a threefold reduction in SNR variation when transitioning from narrowband (10 MHz) to wideband (320 MHz) transmission. Overall, the use of wideband channels in this context makes the system more stable and predictable.
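The variance-reduction argument (averaging SNR across many subcarriers) can be illustrated with a toy Monte Carlo sketch. All modeling choices here are assumptions for illustration — exponential fading powers, a 30% chance of an unwanted-reflection hit per subcarrier, and a fixed noise floor — not the paper's channel model or its 10 MHz / 1.28 GHz configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_snr_db(n_subcarriers, n_trials=2000):
    """Per-trial average SNR (dB) across subcarriers, each hit by
    independent fading and sporadic unwanted-reflection interference."""
    signal = rng.exponential(1.0, size=(n_trials, n_subcarriers))        # fading power
    interference = rng.exponential(0.5, size=(n_trials, n_subcarriers)) * (
        rng.random((n_trials, n_subcarriers)) < 0.3)                     # 30% reflection hits
    snr = signal / (0.1 + interference)                                  # noise floor 0.1
    return 10 * np.log10(snr.mean(axis=1))

narrow = mean_snr_db(8)     # narrowband: few subcarriers to average over
wide = mean_snr_db(1024)    # wideband: many subcarriers to average over

print(f"narrowband SNR variance: {narrow.var():.2f} dB^2")
print(f"wideband   SNR variance: {wide.var():.2f} dB^2")
```

With independent per-subcarrier conditions, the variance of the per-channel average SNR shrinks roughly in proportion to the number of subcarriers, mirroring the narrowband-to-wideband variance gap the abstract reports.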
Clustered federated learning with heterogeneous differential privacy on Non-IID data
IF 4.3 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-20 | DOI: 10.1016/j.comcom.2025.108339
Ping Guo , Cheng Bai , Mingxing Zhang , Puwadol Oak Dusadeerungsikul
Federated Learning (FL) has emerged as a promising technology that has garnered significant attention in the Internet of Things (IoT) domain. However, the non-independent and identically distributed (Non-IID) nature of IoT data, coupled with the vulnerability of gradient transmission in traditional federated learning frameworks, limits its broader applicability. Heterogeneous differential privacy offers tailored privacy protection for individual clients, making it particularly well-suited to the diverse functional requirements of IoT devices. This study proposes a clustered federated learning method with heterogeneous differential privacy (FedCDP) to balance model utility and privacy preservation on Non-IID data. Specifically, we employ a two-stage clustering technique to enhance clustering accuracy amid noise perturbations, and implement a client verification procedure to mitigate the detrimental effects of erroneous clustering and malicious data injection. To address noise accumulation in cluster models, we introduce an intra-cluster privacy budget weighting mechanism and use model shuffling to prevent the server from learning which cluster a local model belongs to. We conducted experimental evaluations under multiple data distribution scenarios; the results show that our method effectively improves robustness to noise and significantly outperforms the baseline methods. In addition, we perform ablation experiments on each module to further analyze its impact on the overall method. These findings underscore the usability and robustness of the proposed method.
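Two of the mechanisms named above — per-client noise calibrated to a heterogeneous privacy budget, and intra-cluster privacy-budget-weighted aggregation — can be sketched as follows. The budgets, clipping norm, and classical Gaussian-mechanism calibration (strictly valid for epsilon ≤ 1) are illustrative assumptions; FedCDP's exact noise calibration, clustering, and shuffling steps are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-client privacy budgets (epsilon); smaller epsilon -> more noise.
budgets = {"client_a": 0.5, "client_b": 2.0, "client_c": 8.0}
CLIP, DELTA = 1.0, 1e-5

def privatize(update, eps):
    """Clip an update to L2 norm CLIP, then add Gaussian noise with the
    classical Gaussian-mechanism sigma for (eps, DELTA)-DP."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, CLIP / norm)
    sigma = CLIP * np.sqrt(2 * np.log(1.25 / DELTA)) / eps
    return clipped + rng.normal(0.0, sigma, size=update.shape)

def aggregate(updates, budgets):
    """Intra-cluster privacy-budget weighting: noisier (low-epsilon)
    clients contribute proportionally less to the cluster model."""
    w = np.array([budgets[c] for c in updates], dtype=float)
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates.values()))

true_update = np.ones(10)
noisy = {c: privatize(true_update.copy(), eps) for c, eps in budgets.items()}
cluster_model = aggregate(noisy, budgets)
print(np.round(cluster_model[:3], 2))
```

Down-weighting low-budget (high-noise) clients keeps the aggregate's noise variance bounded even when individual updates are heavily perturbed, which is the intuition behind the paper's intra-cluster weighting.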
Journal: Computer Communications