
Latest Publications in IEEE Transactions on Network Science and Engineering

Diffusion Model for Relational Inference in Interacting Systems
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-09-15 DOI: 10.1109/TNSE.2025.3607563
Shuhan Zheng;Ziqiang Li;Kantaro Fujiwara;Gouhei Tanaka
Dynamic behaviors of complex interacting systems, ubiquitously found in physical, biological, engineering, and social phenomena, are associated with underlying interactions between components of the system. A fundamental challenge in network science is to uncover interaction relationships between network components solely from observational data on their dynamics. Recently, generative models in machine learning, such as the variational autoencoder, have been used to identify the network structure through relational inference in multivariate time series data. However, most existing approaches are based on time series prediction, which remains challenging in the presence of missing data. In this study, we propose a novel approach, Diffusion model for Relational Inference (DiffRI), inspired by a self-supervised method for probabilistic time series imputation. DiffRI learns to infer the existence probability of interactions between network components through conditional diffusion modeling. Numerical experiments on both synthetic and quasi-real datasets show that DiffRI is highly competitive with other well-known methods in discovering ground truth interactions. Furthermore, we demonstrate that our imputation-based approach is more tolerant of missing data than prediction-based approaches.
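The core mechanism described above, gating the conditioning channels by learnable interaction probabilities inside a denoising objective, can be illustrated with a short self-contained sketch. The following Python/PyTorch toy is not the authors' DiffRI implementation; the module names, shapes, and the single fixed noise level are illustrative assumptions:

```python
# Minimal sketch of imputation-based relational inference in the spirit of
# DiffRI (names and shapes are illustrative assumptions, not the authors' code).
import torch
import torch.nn as nn

class ToyDiffRI(nn.Module):
    def __init__(self, n_nodes, hidden=32):
        super().__init__()
        # Learnable logits for the existence probability of each directed edge.
        self.edge_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.denoiser = nn.GRU(input_size=n_nodes, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_nodes)

    def forward(self, x_noisy, x_cond):
        # Gate the conditioning channels by (soft) interaction probabilities,
        # then predict the injected noise for the noised target channels.
        gate = torch.sigmoid(self.edge_logits)   # (N, N) edge probabilities
        cond = x_cond @ gate                     # mix conditioning series
        h, _ = self.denoiser(x_noisy + cond)
        return self.head(h)

# One (heavily simplified) denoising training step:
model = ToyDiffRI(n_nodes=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 50, 5)                        # batch of multivariate series
noise = torch.randn_like(x)
x_noisy = 0.9 * x + 0.45 * noise                 # single fixed noise level for brevity
loss = ((model(x_noisy, x) - noise) ** 2).mean()
loss.backward(); opt.step()
```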
{"title":"Diffusion Model for Relational Inference in Interacting Systems","authors":"Shuhan Zheng;Ziqiang Li;Kantaro Fujiwara;Gouhei Tanaka","doi":"10.1109/TNSE.2025.3607563","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3607563","url":null,"abstract":"Dynamic behaviors of complex interacting systems, ubiquitously found in physical, biological, engineering, and social phenomena, are associated with underlying interactions between components of the system. A fundamental challenge in network science is to uncover interaction relationships between network components solely from observational data on their dynamics. Recently, generative models in machine learning, such as the variational autoencoder, have been used to identify the network structure through relational inference in multivariate time series data. However, most existing approaches are based on time series predictions, which are still challenging in the presence of missing data. In this study, we propose a novel approach, <bold>Diff</b>usion model for <bold>R</b>elational <bold>I</b>nference (DiffRI), inspired by a self-supervised method for probabilistic time series imputation. DiffRI learns to infer the existence probability of interactions between network components through conditional diffusion modeling. Numerical experiments on both synthetic and quasi-real datasets show that DiffRI is highly competent with other well-known methods in discovering ground truth interactions. Furthermore, we demonstrate that our imputation-based approach is more tolerant of missing data than prediction-based approaches.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"1990-2003"},"PeriodicalIF":7.9,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11164166","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
QPADL: Quadratic Programming for Allocation of Distributed Energy Resources to Minimize Power Loss in Distribution Networks
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-09-11 DOI: 10.1109/TNSE.2025.3608895
Hongshen Zhang;Shibo He;Yongtao Zhang;Wenchao Meng
Distributed Energy Resources (DERs) offer significant potential for reducing power losses, improving voltage stability, and enhancing resilience in distribution networks. To effectively address network-specific operational constraints and maximize DER performance, it is crucial to strategically optimize both their siting and sizing. Existing works primarily adopt analytical or search-based approaches for DER placement aimed at minimizing power losses. However, analytical methods, while computationally efficient, frequently yield suboptimal solutions at higher DER penetration levels, whereas search-based methods, despite their robustness, become computationally impractical for large-scale networks due to exponential complexity growth. To overcome these limitations, this paper proposes a novel analytical framework that establishes an exact quadratic relationship between power losses and DER injections, enabling precise analytical estimation and optimization. The proposed approach explicitly relates nodal power demands to their respective contributions to system line losses, providing detailed theoretical insights into the root causes of power losses. Practically, the proposed framework facilitates real-time, large-scale DER allocation optimization while maintaining high accuracy. Furthermore, our theoretical analysis quantifies the impact of the DER power factor on optimal placement for loss reduction. This insight provides a direct, simplified method for integrating power loss considerations into complex, multi-objective optimization models. We validate our method on 33-, 69-, 123-, and 533-bus distribution networks. It significantly outperforms feature-based analytical approaches and matches or exceeds traditional search-based methods. On the largest 533-bus system, our algorithm completes the allocation in just 0.5 s, confirming its effectiveness and practicality for real-world applications.
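As a rough illustration of the quadratic loss-injection relationship, the sketch below minimizes a toy quadratic power-loss surrogate over bounded DER injections. It is a hedged sketch, not QPADL itself: the matrix Q, vector c, bus count, and capacity bounds are stand-in assumptions for the loss-sensitivity terms the paper derives.

```python
# Hedged sketch: minimizing a quadratic power-loss surrogate over DER injections.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_bus = 6
M = rng.normal(size=(n_bus, n_bus))
Q = M @ M.T + n_bus * np.eye(n_bus)     # positive-definite loss-sensitivity matrix (toy)
c = rng.normal(size=n_bus)              # linear coupling with existing demand (toy)

def loss(p):                            # P_loss(p) = p^T Q p + c^T p  (exact QP form)
    return p @ Q @ p + c @ p

res = minimize(loss, x0=np.zeros(n_bus),
               jac=lambda p: 2 * Q @ p + c,
               bounds=[(0.0, 1.0)] * n_bus)   # per-bus DER capacity limits
print("optimal injections:", np.round(res.x, 3))
```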
{"title":"QPADL: Quadratic Programming for Allocation of Distributed Energy Resources to Minimize Power Loss in Distribution Networks","authors":"Hongshen Zhang;Shibo He;Yongtao Zhang;Wenchao Meng","doi":"10.1109/TNSE.2025.3608895","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3608895","url":null,"abstract":"Distributed Energy Resources (DERs) offer significant potential for reducing power losses, improving voltage stability, and enhancing resilience in distribution networks. To effectively address network-specific operational constraints and maximize DER performance, it is crucial to strategically optimize both their siting and sizing. Existing works primarily adopt analytical or search-based approaches for DER placement aimed at minimizing power losses. However, analytical methods, while computationally efficient, frequently yield suboptimal solutions at higher DER penetration levels, whereas search-based methods, despite their robustness, become computationally impractical for large-scale networks due to exponential complexity growth. To overcome the limitations, this paper proposes a novel analytical framework that establishes an exact quadratic relationship between power losses and DER injections, enabling a precise analytical estimation and optimization. The proposed approach explicitly relates nodal power demands to their respective contributions to system line losses, providing detailed theoretical insights into the root causes of power losses. Practically, the proposed framework facilitates real-time, large-scale DER allocation optimization while maintaining high accuracy. Furthermore, our theoretical analysis quantifies the impact of the DER power factor on optimal placement for loss reduction. This insight provides a direct, simplified method for integrating power loss considerations into complex, multi-objective optimization models. We validate our method on 33, 69, 123 and 533-bus distribution networks. It significantly outperforms feature-based analytical approaches and matches or exceeds traditional search-based methods. On the largest 533-bus system, our algorithm completes the allocation in just 0.5 s, confirming its effectiveness and practicality for real-world applications.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"2038-2052"},"PeriodicalIF":7.9,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Quantum Deep Reinforcement Learning for Digital Twin-Enabled 6G Networks and Semantic Communications: Considerations for Adoption and Security
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-09-11 DOI: 10.1109/TNSE.2025.3609198
Bhaskara Narottama;Abrar Ul Haq;James Adu Ansere;Nidhi Simmons;Berk Canberk;Simon L. Cotton;Hyundong Shin;Trung Q. Duong
Recently, quantum deep reinforcement learning (Q-DRL) has started to gain attention as a potential approach for tackling complex challenges in wireless communication systems. In particular, Q-DRL, integrating quantum operations into deep learning models, can effectively handle dynamic environments and process large-scale optimizations. As future wireless networks continue to evolve, greater emphasis is being placed on context and meaning rather than raw data. New paradigms, such as semantic communications (SemComs), are essential to effectively convey meaning between transmitters and receivers. By linking SemComs with Q-DRL, future wireless networks will be capable of large-scale extraction and decoding of meaning, thereby minimizing reliance on complete context sharing between communicating parties. Together with SemComs, digital twins (DTs) have been considered as key enablers for future wireless networks. As virtual replicas of physical networks, they play an important role in network operation, optimization, and control. In this regard, Q-DRL will also be highly beneficial for DTs in enhancing critical functions such as data management and security. This study offers fresh outlooks on how to leverage Q-DRL for SemComs in future wireless networks, augmented by the use of DTs.
{"title":"Quantum Deep Reinforcement Learning for Digital Twin-Enabled 6G Networks and Semantic Communications: Considerations for Adoption and Security","authors":"Bhaskara Narottama;Abrar Ul Haq;James Adu Ansere;Nidhi Simmons;Berk Canberk;Simon L. Cotton;Hyundong Shin;Trung Q. Duong","doi":"10.1109/TNSE.2025.3609198","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3609198","url":null,"abstract":"Recently, quantum deep reinforcement learning (Q-DRL) has started to gain attention as a potential approach for tackling complex challenges in wireless communication systems. In particular, Q-DRL, integrating quantum operations into deep learning models, can effectively handle dynamic environments and process large-scale optimizations. As future wireless networks continue to evolve, greater emphasis is being placed on context and meaning rather than raw data. New paradigms, such as semantic communications (SemComs) are essential to effectively convey meaning between transmitters and receivers. By linking SemComs with Q-DRL, future wireless networks will be capable of large-scale extractions and decoding of meaning, thereby minimizing reliance on complete context sharing between communicating parties. Together with SemComs, digital twins (DTs) have been considered as key enablers for future wireless networks. As virtual replicas of physical networks, they serve an important role in network operation, optimization, and control. In this regard, Q-DRL will also be highly beneficial for DTs in enhancing critical functions such as data management and security. This study offers fresh outlooks on how to leverage Q-DRL for SemComs in future wireless networks, augmented by the use of DTs.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"2053-2076"},"PeriodicalIF":7.9,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Node Importance Estimation via Multi-View Graph Prompting
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-09-10 DOI: 10.1109/TNSE.2025.3608483
Siqi Ma;Yang Fang;Weidong Xiao;Xiang Zhao
Node importance estimation involves assigning a global importance score to each node in a graph, pivotal to various subsequent tasks, including recommendation, network dismantling, etc. Prior research involves pre-training classification tasks using node labels and structural information, followed by computing node importance scores as a downstream regression task. However, the inconsistency between the pre-training and downstream tasks creates a gap that tends to cause negative transfer. This paper proposes to narrow that gap for node importance estimation by implementing a multi-view technique, including node-view for context and graph-view for structure. Specifically, in node-view, we devise soft prompts by encoding node information, which enables the model to capture structural features within a semantic context; afterward, the downstream node regression task is aligned with pre-training by inserting prompt patterns. In graph-view, we introduce virtual nodes, which are learnably inserted based on node importance, to create a prompt graph. High-importance nodes in the original graph are linked to more virtual nodes, improving their embeddings in subsequent propagation steps. Such enhancement increases their importance scores in downstream tasks, improving the model's ability to distinguish significant nodes effectively. Additionally, the prompts from different views are fused through multi-view contrastive learning to further enhance the expressiveness of the node embeddings. We empirically evaluate our model on four public datasets, where it significantly and consistently outperforms other state-of-the-art alternatives.
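The graph-view prompt can be pictured with a small toy: virtual nodes are wired to real nodes in proportion to an importance proxy, and one aggregation step runs on the augmented graph. This is a simplified sketch under stated assumptions (degree as the importance proxy, mean aggregation, a fixed attachment rule), not the paper's learnable model:

```python
# Illustrative sketch of a virtual-node prompt graph (assumptions, not the paper's code).
import numpy as np

rng = np.random.default_rng(1)
A = (rng.random((6, 6)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                      # undirected original graph
score = A.sum(1)                                    # proxy importance: degree
k_virtual = np.ceil(score / max(score.max(), 1.0) * 2).astype(int)  # more links if important

n, v = A.shape[0], int(k_virtual.sum())
Aug = np.zeros((n + v, n + v))
Aug[:n, :n] = A
col = n
for i, k in enumerate(k_virtual):                   # wire node i to k virtual nodes
    for _ in range(k):
        Aug[i, col] = Aug[col, i] = 1.0
        col += 1

X = rng.normal(size=(n + v, 8))                     # real-node + virtual-node embeddings
deg = Aug.sum(1, keepdims=True).clip(min=1.0)
X_next = (Aug @ X) / deg                            # one mean-aggregation step
print(X_next[:n].shape)                             # updated real-node embeddings
```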
{"title":"Node Importance Estimation via Multi-View Graph Prompting","authors":"Siqi Ma;Yang Fang;Weidong Xiao;Xiang Zhao","doi":"10.1109/TNSE.2025.3608483","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3608483","url":null,"abstract":"Node importance estimation involves assigning a global importance score to each node in a graph, pivotal to various subsequent tasks, including recommendation, network dismantling, etc. Prior research involves pre-training classification tasks using node labels and structural information, followed by computing node importance scores as a downstream regression task. However, a gap exists caused by the inconsistency between the pre-training and downstream tasks, which tends to exert negative transfer. This paper proposes to narrow down the gap for node importance estimation by implementing a multi-view technique, including node-view for context and graph-view for structure. Specifically, in node-view, we devise soft prompts by encoding node information, which enables the model to capture structural features within a semantic context; afterward, the downstream node regression task is aligned with pre-training by inserting prompt patterns. In graph-view, we introduce virtual nodes, which are learnably inserted based on node importance, to create a prompt graph. High-importance nodes in the original graph are linked to more virtual nodes, improving their embeddings in subsequent propagation steps. Such enhancement increases their importance scores in downstream tasks, improving the model's ability to distinguish significant nodes effectively. Additionally, the prompts from different views are fused through multi-view contrastive learning to further enhance the expressiveness of the node embeddings. We empirically evaluate our model on four public datasets, which are shown to outperform other state-of-the-art alternatives significantly and consistently.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"2022-2037"},"PeriodicalIF":7.9,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bayesian Inference-Aided Large Language Model Agents in Infinitely Repeated Games: A Dynamic Network View
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-09-09 DOI: 10.1109/TNSE.2025.3607916
Dingwen Pan;Weilong Chen;Jian Shi;Chenye Wu;Dan Wang;Choong Seon Hong;Zhu Han
The rapid expansion of large language models (LLMs) has led to increasingly frequent interactions between LLM agents and human users, motivating new questions about their capacity to form and maintain cooperative relationships. To this end, game theory, as an effective tool in the study of strategic interactions, has garnered attention and has been employed in the research field of LLMs, particularly in exploring their interactions with users. However, most previous studies focused on the performance of LLMs in static games or finitely repeated games, and these studies are relatively stylized and cannot fully capture the complex, evolving nature of User–LLM interactions. In this paper, we modeled User–LLM interactions as a dynamic network of repeated strategic exchanges and proposed an infinitely repeated game framework to analyze the behavioral traits of LLMs in such settings. To enable adaptive decision-making under uncertainty, we further incorporated Bayesian inference using a beta distribution as both the prior and posterior. We conducted a case study on trending, state-of-the-art LLMs: GPT-3, GPT-4, DeepSeek-V3, Qwen2.5-72B, Qwen2.5-7B, and Llama-3-70B. Experimental results demonstrate that LLMs show decent performance in infinitely repeated games, indicating their capability in decision-making and cooperation during repeated interactions within dynamic networks. The integration of Bayesian inference further reveals that LLMs can effectively process probabilistic information, leading to improved performance. Our findings suggest that LLM agents prefer to consider future payoffs rather than caring only about single-stage rewards, and that they can build and maintain long-term cooperative relationships with users in dynamic network settings.
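The beta-prior/posterior aid admits a compact worked example: a conjugate Beta-Bernoulli update of the belief that the counterpart cooperates, followed by a discounted-payoff comparison. The payoff values and decision rule below are toy assumptions, not the paper's protocol:

```python
# Beta-Bernoulli sketch of Bayesian-aided play in an infinitely repeated game.
a, b = 1.0, 1.0                          # uniform Beta(1, 1) prior
history = [1, 1, 0, 1, 1, 1]             # observed counterpart moves (1 = cooperate)

for coop in history:                     # conjugate posterior update
    a, b = a + coop, b + (1 - coop)

p_coop = a / (a + b)                     # posterior mean of cooperation probability
delta = 0.95                             # discount factor for future payoffs
# Cooperate if the discounted value of sustained cooperation beats one-shot defection
# (toy prisoner's-dilemma-style payoffs: 3 mutual, 5 temptation, 1 punishment).
value_cooperate = p_coop * 3 / (1 - delta)
value_defect = 5 + p_coop * 1 / (1 - delta)
action = "cooperate" if value_cooperate >= value_defect else "defect"
print(f"P(counterpart cooperates) = {p_coop:.2f} -> {action}")
```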
{"title":"Bayesian Inference-Aided Large Language Model Agents in Infinitely Repeated Games: A Dynamic Network View","authors":"Dingwen Pan;Weilong Chen;Jian Shi;Chenye Wu;Dan Wang;Choong Seon Hong;Zhu Han","doi":"10.1109/TNSE.2025.3607916","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3607916","url":null,"abstract":"The rapid expansion of large language models (LLMs) has led to increasingly frequent interactions between LLM agents and human users, motivating new questions about their capacity to form and maintain cooperative relationships. To this end, game theory, as an effective tool in the study of strategic interactions, has gathered attention and has been employed in the research field of LLMs, particularly in exploring their interactions with users. However, most previous studies focused on the performance of LLMs in static games or finitely repeated games, and these studies are relatively stylized and cannot fully capture the complex, evolving nature of User–LLM interactions. In this paper, we modeled User–LLM interactions as a dynamic network of repeated strategic exchanges and proposed an infinitely repeated game framework to analyze the behavioral traits of LLMs in such settings. To enable adaptive decision-making under uncertainty, we further incorporated Bayesian inference using a beta distribution as both the prior and posterior. We conducted a case study over the trending and state-of-the-art LLMs: GPT-3, GPT-4, DeepSeek-V3, Qwen2.5-72B, Qwen2.5-7B and Llama-3-70B. Experimental results demonstrate that LLMs show decent performance in infinitely repeated games, indicating their capability in decision-making and cooperation during repeated interactions within dynamic networks. The integration of Bayesian inference further reveals that LLMs can effectively process probabilistic information, leading to improved performance. Our findings suggest that LLM agents prefer to consider future payoffs rather than only caring about single-stage rewards, as well as the ability to build and maintain long-term cooperative relationships with users in dynamic network settings.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"2004-2021"},"PeriodicalIF":7.9,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Parallel Gradient Computation and Synchronization: Enhancing the Efficiency of Distributed Training for LLMs
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-09-08 DOI: 10.1109/TNSE.2025.3607331
Hao Li;Hao Jiang;Jing Wu;Guiao Yang;Jian Zhang
As the size of large language models (LLMs) increases, the limitations of a single data center, such as constrained computational resources and storage capacity, have made distributed training across multiple data centers the preferred solution. However, a primary challenge in this context is reducing the impact of gradient synchronization on the training efficiency across multiple data centers. In this work, we propose a distributed training scheme for LLMs, named parallel gradient computation and synchronization (PGCS). Specifically, while one expert model is being trained to compute gradients, another expert model performs gradient synchronization in parallel. In addition, a gradient synchronization algorithm named BLP is developed to find the optimal gradient synchronization strategy under arbitrary network connectivity and limited bandwidth across multiple data centers. Ultimately, the effectiveness of PGCS and BLP in enhancing the efficiency of distributed training is demonstrated through comprehensive simulations and physical experiments.
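The overlap at the heart of PGCS, computing gradients for one expert while the other expert's gradients synchronize, can be sketched with two threads. The two-expert loop, sleep-based timings, and function names below are illustrative assumptions, not the paper's implementation:

```python
# Sketch of overlapping gradient computation with gradient synchronization.
import threading, time

def compute_gradients(expert):
    time.sleep(0.1)                      # stand-in for a backward pass
    return f"grads({expert})"

def synchronize(grads):
    time.sleep(0.1)                      # stand-in for a cross-datacenter all-reduce

experts, sync_thread = ["expert_0", "expert_1"], None
for step in range(4):
    expert = experts[step % 2]
    grads = compute_gradients(expert)    # compute for the current expert ...
    if sync_thread is not None:
        sync_thread.join()               # ... while the previous sync finishes
    sync_thread = threading.Thread(target=synchronize, args=(grads,))
    sync_thread.start()                  # overlap the next compute with this sync
if sync_thread is not None:
    sync_thread.join()
print("done: gradient computation and synchronization overlapped")
```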
{"title":"Parallel Gradient Computation and Synchronization: Enhancing the Efficiency of Distributed Training for LLMs","authors":"Hao Li;Hao Jiang;Jing Wu;Guiao Yang;Jian Zhang","doi":"10.1109/TNSE.2025.3607331","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3607331","url":null,"abstract":"As the size of large language models (LLMs) increases, the limitations of a single data center, such as constrained computational resources and storage capacity, have made distributed training across multiple data centers the preferred solution. However, a primary challenge in this context is reducing the impact of gradient synchronization on the training efficiency across multiple data centers. In this work, we propose a distributed training scheme for LLMs, named parallel gradient computation and synchronization (PGCS). Specifically, while one expert model is being trained to compute gradients, another expert model performs gradient synchronization in parallel. In addition, a gradient synchronization algorithm named BLP is developed to find the optimal gradient synchronization strategy under arbitrary network connectivity and limited bandwidth across multiple data centers. Ultimately, the effectiveness of PGCS and BLP in enhancing the efficiency of distributed training is demonstrated through comprehensive simulations and physical experiments.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"1962-1976"},"PeriodicalIF":7.9,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Zero-Trust Enabled Anonymous Continuous Cross-Domain Authentication for UAVs: A Blockchain-Based Approach
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-09-08 DOI: 10.1109/TNSE.2025.3607494
Xinchao Wang;Wei Wang;Cheng Huang;Ping Cao
The open and zero-trust nature of heterogeneous low-altitude intelligence networks demands more stringent secure authentication than conventional schemes can provide, owing to static authorization misalignment, the infiltration risk of long-validity tokens, and single-factor credential ossification. To address these challenges, this study proposes a blockchain-based cross-domain authentication scheme. We first develop a blockchain-enabled secure cross-domain registration and information management architecture incorporating a dual-index data structure for efficient historical query operations. Unmanned aerial vehicles (UAVs) achieve cross-domain registration through blockchain-based secure interactions with target domain trusted authorities (TAs). A cross-domain authentication protocol integrating a physical unclonable function (PUF) and a hash-based signature technique is designed for mutual authentication. The TA generates time-limited cross-domain tokens with restricted communication attempts for UAVs, which subsequently establish negotiated session keys with base stations for secure resource sharing. To enhance security dynamics, both parties update temporary identity information and prepare fresh authentication keys during each token request cycle. The TA delegates token-updating random factors to base stations to ensure secure token renewal. Additionally, as the blockchain records the hash values of each token round, the TA can detect whether internal attackers have tampered with the token state. The security analysis and experiments demonstrate the advantages of our scheme.
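A minimal sketch of a time-limited, attempt-limited token helps fix ideas. The field layout, lifetime, and HMAC construction below are assumptions for illustration only; the actual scheme additionally binds tokens to PUF responses, hash-based signatures, and on-chain token-state records:

```python
# Hedged sketch of a time-limited, attempt-limited cross-domain token.
import hmac, hashlib, json, time

TA_KEY = b"trusted-authority-secret"     # held by the target-domain TA (toy key)

def issue_token(uav_id, lifetime_s=60, max_attempts=5):
    body = {"uav": uav_id, "exp": time.time() + lifetime_s,
            "attempts": max_attempts, "nonce": "r1"}
    raw = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(TA_KEY, raw, hashlib.sha256).hexdigest()
    return body, tag

def verify(body, tag):
    raw = json.dumps(body, sort_keys=True).encode()
    ok = hmac.compare_digest(tag, hmac.new(TA_KEY, raw, hashlib.sha256).hexdigest())
    # Accept only unexpired tokens with remaining communication attempts.
    return ok and time.time() < body["exp"] and body["attempts"] > 0

token, tag = issue_token("uav-42")
print("token accepted:", verify(token, tag))
```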
{"title":"Zero-Trust Enabled Anonymous Continuous Cross-Domain Authentication for UAVs: A Blockchain-Based Approach","authors":"Xinchao Wang;Wei Wang;Cheng Huang;Ping Cao","doi":"10.1109/TNSE.2025.3607494","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3607494","url":null,"abstract":"The open and zero-trust nature of the heterogeneous low-altitude intelligence network requires more stringent secure authentication that cannot be meet with conventional schemes, due to the static authorization misalignment, long-validity token infiltration risk, and single-factor credential ossification. To address these challenges, this study proposes a blockchain-based cross-domain authentication scheme. We first develop a blockchain-enabled secure cross-domain registration and information management architecture incorporating a dual-index data structure for efficient historical query operations. Unmanned aerial vehicles (UAVs) achieve cross-domain registration through blockchain-based secure interactions with target domain trusted authorities (TAs). A cross-domain authentication protocol integrating physical unclonable function (PUF) and hash-based signature technique is designed, for mutual authentication. The TA generates time-limited cross-domain tokens with restricted communication attempts for UAVs, which subsequently establish negotiated session keys with base stations for secure resource sharing. To enhance security dynamics, both parties update temporary identity information and prepare fresh authentication keys during each token request cycle. The TA delegates token-updating random factors to base stations to ensure secure token renewal. Additionally, as the blockchain records the hash values of each token round, TA can detect if internal attackers have tampered with the token state. The security analysis and experiments demonstrate the advantages of our scheme.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"1977-1989"},"PeriodicalIF":7.9,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing Energy Consumption and Coverage in Underwater Magnetic Induction-Assisted Acoustic WSNs Using Learning Automata-Based Cooperative MIMO Formation
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-08-22 DOI: 10.1109/TNSE.2025.3561751
Qingyan Ren;Yanjing Sun;Sizhen Bian;Michele Magno
Underwater Wireless Sensor Networks (UWSNs) offer promising exploration capabilities in challenging underwater environments, necessitating a focus on reducing energy consumption while guaranteeing monitoring coverage. Underwater magnetic induction (MI)-assisted acoustic cooperative multiple-input–multiple-output (MIMO) WSNs have shown advantages over traditional UWSNs in various aspects due to the seamless integration of sensor networks and communication technology. However, as an emerging topic, a critical gap exists: existing studies often overlook the vital considerations of monitoring coverage requirements and the dynamic nature of the unknown underwater environment. Moreover, these advantages can be further enhanced by harnessing the collaborative potential of multiple independent underwater nodes. This paper introduces a significant advancement to the field of MI-assisted acoustic cooperative MIMO WSNs by leveraging the innovative Confident Information Coverage (CIC) and a reinforcement learning paradigm known as Learning Automata (LA). The paper presents the LA-based Cooperative MIMO Formation (LACMF) algorithm, designed to minimize communication energy consumption in sensors while concurrently maximizing coverage performance. Experimental results demonstrate that LACMF considerably outperforms other schemes in terms of energy consumption and network coverage while satisfying the imposed constraints: the CIC can be improved by up to an additional 52%, alongside an 11% reduction in energy consumption.
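The learning-automata ingredient can be shown with the classic linear reward-inaction (L_RI) update over a few candidate formation actions. The reward model, action count, and learning rate are toy assumptions, not the LACMF algorithm itself:

```python
# Minimal learning-automata sketch (linear reward-inaction, L_RI).
import numpy as np

rng = np.random.default_rng(2)
p = np.ones(3) / 3                       # probabilities over 3 candidate formations
true_quality = np.array([0.2, 0.7, 0.5]) # hidden success rate of each action (toy)
lr = 0.1                                 # reward (learning-rate) parameter

for _ in range(500):
    a = rng.choice(3, p=p)
    if rng.random() < true_quality[a]:   # favorable response from the environment
        p = (1 - lr) * p                 # L_RI: shrink all probabilities ...
        p[a] += lr                       # ... then boost the chosen action
    # unfavorable response: leave probabilities unchanged (inaction)

print("learned preference:", np.round(p, 2))  # mass concentrates on action 1
```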
{"title":"Optimizing Energy Consumption and Coverage in Underwater Magnetic Induction-Assisted Acoustic WSNs Using Learning Automata-Based Cooperative MIMO Formation","authors":"Qingyan Ren;Yanjing Sun;Sizhen Bian;Michele Magno","doi":"10.1109/TNSE.2025.3561751","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3561751","url":null,"abstract":"Underwater Wireless Sensor Networks (UWSNs) offer promising exploration capabilities in challenging underwater environments, necessitating a focus on reducing energy consumption while guaranteeing monitoring coverage. Underwater magnetic induction (MI)-assisted acoustic cooperative multiple-input–multiple-output (MIMO) WSNs have shown advantages over traditional UWSNs in various aspects due to the seamless integration of sensor networks and communication technology. However, as an emerging topic, a critical gap exists, as they often overlook the vital considerations of monitoring coverage requirements and the dynamic nature of the unknown underwater environment. Moreover, these advantages can be further enhanced by harnessing the collaborative potential of multiple independent underwater nodes. This paper introduces a significant advancement to the field of MI-assisted Acoustic Cooperative MIMO WSNs leveraging the innovative Confident Information Coverage (CIC) and a reinforcement learning paradigm known as Learning Automata (LA). The paper presents the LA-based Cooperative MIMO Formation (LACMF) algorithm designed to minimize communication energy consumption in sensors while concurrently maximizing coverage performance. Experimental results demonstrate the LACMF considerably outperforms other schemes in terms of energy consumption, and network coverage to satisfy the imposed constraints, the CIC can be improved up to by an additional 52%, 11% reduction in energy consumption.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 5","pages":"3527-3540"},"PeriodicalIF":7.9,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144891284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive Load Balancing for Industrial Edge Computing Systems: An AxTD3-Deep Reinforcement Learning Approach
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-08-20 DOI: 10.1109/TNSE.2025.3600924
Fenghui Zhang;Yuhang Jiang;Xuecai Bao;Xiancun Zhou;Yu Zong;Xiaohu Liang;Kun Yang
Introducing edge computing into smart manufacturing can enhance factory efficiency and productivity. By leveraging a central scheduler to connect Edge Servers (ESs) in these factories, resource sharing can be achieved. However, the unpredictable nature of task offloading from factory IoT devices results in varying task loads at each ES, expanding the action space and complicating task scheduling coordination, thus impeding effective load balancing. To address this challenge, we propose an AxTD3-Deep Reinforcement Learning (DRL) method to balance the system while reducing system latency. Firstly, considering that each ES hosts multiple virtual machines, we propose a workload balancing algorithm to ensure more balanced computation among the virtual machines of each ES. Next, we construct this system as a reinforcement learning model and analyze its state and action spaces. Based on this analysis, we modify the system's states and actions to reduce its complexity without compromising utility. We then design the AxTD3-DRL variants A2TD3 and A3TD3 to balance the system, dividing the neural network into several parallel sub-networks to further reduce the action and state spaces, thereby accelerating convergence. Finally, we compare the designed method with classic DRL algorithms (e.g., SAC, TD3) and heuristic approaches (e.g., PSO). The results show that our proposed AxTD3 algorithm not only balances the load across ESs but also reduces the average system latency.
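The sub-network idea, splitting one wide actor into parallel heads that each emit a slice of the joint scheduling action, can be sketched as follows. The shared trunk, layer sizes, and two-way split are illustrative assumptions, not the A2TD3/A3TD3 architectures themselves:

```python
# Sketch of action-space factorization via parallel sub-network heads.
import torch
import torch.nn as nn

class SplitActor(nn.Module):
    def __init__(self, state_dim=16, action_dim=8, n_subnets=2):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        slice_dim = action_dim // n_subnets
        # Parallel sub-networks, each responsible for a smaller action slice.
        self.heads = nn.ModuleList(
            [nn.Linear(64, slice_dim) for _ in range(n_subnets)])

    def forward(self, state):
        h = self.trunk(state)
        return torch.cat([torch.tanh(head(h)) for head in self.heads], dim=-1)

actor = SplitActor()
print(actor(torch.randn(4, 16)).shape)   # joint action assembled from slices
```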
{"title":"Adaptive Load Balancing for Industrial Edge Computing Systems: An AxTD3-Deep Reinforcement Learning Approach","authors":"Fenghui Zhang;Yuhang Jiang;Xuecai Bao;Xiancun Zhou;Yu Zong;Xiaohu Liang;Kun Yang","doi":"10.1109/TNSE.2025.3600924","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3600924","url":null,"abstract":"Introducing edge computing into smart manufacturing can enhance factory efficiency and productivity. By leveraging a central scheduler to connect Edge Servers (ESs) in these factories, resource sharing can be achieved. However, the unpredictable nature of task offloading from factory IoT devices results in varying task loads at each ES, expanding the action space and complicating task scheduling coordination, thus impeding effective load balancing. To address this challenge, we propose an AxTD3-Deep Reinforcement Learning (DRL) method to balance the system while reducing system latency. Firstly, we consider that each ES has multiple virtual machines and propose a workload balancing algorithm to ensure more balanced computation among the virtual machines of each ES. Next, we construct this system as a reinforcement learning model and analyze its state and action spaces. Based on this analysis, we modify the system's states and actions to reduce its complexity without compromising utility. We then design the AxTD3-DRL to balance the system, i.e., A2TD3 and A3TD3, dividing a neural network into several parallel sub-networks to further reduce the action space and state space, thereby accelerating convergence. Finally, we compare the designed method with classic DRL algorithms (e.g., SAC, TD3) and heuristic approaches (e.g., PSO). The results show that our proposed AxTD3 algorithm not only balances the load across ESs but also reduces the average system latency.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"4743-4759"},"PeriodicalIF":7.9,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On the Asymptotic Convergence of Subgraph Generated Models
IF 7.9 CAS Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2025-08-19 DOI: 10.1109/TNSE.2025.3598991
Xinchen Xu;Francesca Parise
We study a family of random graph models - termed subgraph generated models (SUGMs) - initially developed by Chandrasekhar and Jackson (2025) in which higher-order structures are explicitly included in the network formation process. We use matrix concentration inequalities to bound the difference between the adjacency matrix of networks realized from such SUGMs and the expected adjacency matrix as a function of the network size. We apply this result to derive high-probability bounds on the difference between centrality measures (such as degree, eigenvector, and Katz centrality) in sampled versus expected normalized networks.
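A quick numerical illustration of the concentration statement: sample a graph from a toy SUGM with independent pairwise links plus independently formed triangles over a fixed candidate set, then measure how far the sampled adjacency matrix sits from its expectation. The sizes, rates, and candidate-triple design below are assumptions, not the paper's setup:

```python
# Toy numerical check: sampled vs. expected adjacency in a simple SUGM.
import numpy as np

rng = np.random.default_rng(3)
n, p_link, q_tri, n_triples = 200, 0.02, 0.3, 400

# Fixed candidate triples for the triangle subgraph.
triples = [tuple(sorted(rng.choice(n, 3, replace=False))) for _ in range(n_triples)]

A = np.zeros((n, n)); EA = np.zeros((n, n))
iu = np.triu_indices(n, 1)
A[iu] = rng.random(iu[0].size) < p_link            # sample pairwise links
EA[iu] = p_link                                    # expected link contribution
for (i, j, k) in triples:                          # expectation: union of events
    for (u, v) in ((i, j), (j, k), (i, k)):
        EA[u, v] = EA[u, v] + q_tri * (1 - EA[u, v])
for (i, j, k) in triples:                          # whole triangle forms at once
    if rng.random() < q_tri:
        for (u, v) in ((i, j), (j, k), (i, k)):
            A[u, v] = 1.0
A = np.maximum(A, A.T); EA = np.maximum(EA, EA.T)  # symmetrize

print("||A - E[A]||_2 =", round(np.linalg.norm(A - EA, 2), 2))  # spectral norm
print("max degree gap  =", round(np.abs((A - EA).sum(1)).max(), 2))
```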
{"title":"On the Asymptotic Convergence of Subgraph Generated Models","authors":"Xinchen Xu;Francesca Parise","doi":"10.1109/TNSE.2025.3598991","DOIUrl":"https://doi.org/10.1109/TNSE.2025.3598991","url":null,"abstract":"We study a family of random graph models - termed subgraph generated models (SUGMs) - initially developed by Chandrasekhar and Jackson (2025) in which higher-order structures are explicitly included in the network formation process. We use matrix concentration inequalities to bound the difference between the adjacency matrix of networks realized from such SUGMs and the expected adjacency matrix as a function of the network size. We apply this result to derive high-probability bounds on the difference between centrality measures (such as degree, eigenvector, and Katz centrality) in sampled versus expected normalized networks.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"13 ","pages":"5654-5662"},"PeriodicalIF":7.9,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0