
Latest publications in IEEE Networking Letters

Editorial SI on Advances in AI for 6G Networks
Pub Date : 2025-02-11 DOI: 10.1109/LNET.2024.3519937
Hatim Chergui;Kamel Tourki;Jun Wu
The advent of 6G networks heralds a new era of telecommunications characterized by unparalleled connectivity, ultra-low latency, and immersive applications such as holographic communication and Industry 5.0. However, these advancements also introduce significant complexities in network management and service orchestration. This Special Issue of IEEE Networking Letters explores cutting-edge research on Artificial Intelligence (AI)-driven automation techniques designed to address these challenges. The selected works span a diverse array of AI paradigms—ranging from generative AI (GenAI) and reinforcement learning to multi-agent systems and federated learning—showcasing their applications across various 6G technological domains. By highlighting these innovations, this issue aims to provide valuable insights into the pivotal role of AI in shaping the future of 6G networks.
Citations: 0
IEEE Networking Letters Author Guidelines
Pub Date : 2025-02-11 DOI: 10.1109/LNET.2025.3526350
{"title":"IEEE Networking Letters Author Guidelines","authors":"","doi":"10.1109/LNET.2025.3526350","DOIUrl":"https://doi.org/10.1109/LNET.2025.3526350","url":null,"abstract":"","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"276-277"},"PeriodicalIF":0.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879520","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Networking Letters Society Information
Pub Date : 2025-02-11 DOI: 10.1109/LNET.2025.3526352
{"title":"IEEE Networking Letters Society Information","authors":"","doi":"10.1109/LNET.2025.3526352","DOIUrl":"https://doi.org/10.1109/LNET.2025.3526352","url":null,"abstract":"","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879519","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Networking Letters Publication Information
Pub Date : 2025-02-11 DOI: 10.1109/LNET.2025.3526348
{"title":"IEEE Networking Letters Publication Information","authors":"","doi":"10.1109/LNET.2025.3526348","DOIUrl":"https://doi.org/10.1109/LNET.2025.3526348","url":null,"abstract":"","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"C2-C2"},"PeriodicalIF":0.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10879518","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Novel Framework of Horizontal-Vertical Hybrid Federated Learning for EdgeIoT
Pub Date : 2025-02-10 DOI: 10.1109/LNET.2025.3540268
Kai Li;Yilei Liang;Xin Yuan;Wei Ni;Jon Crowcroft;Chau Yuen;Ozgur B. Akan
This letter puts forth a new hybrid horizontal-vertical federated learning (HoVeFL) framework for mobile edge computing-enabled Internet of Things (EdgeIoT). In this framework, certain EdgeIoT devices train local models using the same data samples but analyze disparate data features, while the others focus on the same features using non-independent and identically distributed (non-IID) data samples. Thus, even though the data features are consistent, the data samples vary across devices. The proposed HoVeFL formulates the training of local and global models to minimize the global loss function. Performance evaluations on the CIFAR-10 and SVHN datasets reveal that the testing loss of HoVeFL with 12 horizontal FL devices and six vertical FL devices is 5.5% and 25.2% higher, respectively, than that of a setup with six horizontal FL devices and 12 vertical FL devices.
Citations: 0
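
The abstract above describes two device populations: devices that share the feature space but hold different (non-IID) sample shards, and devices that share the samples but see different feature blocks. As a rough, self-contained sketch of how one hybrid round could be aggregated, the toy Python below averages full-dimension gradients from the horizontal devices and stitches per-feature-block gradients from the vertical devices on a simple linear model. The device counts (12 horizontal, 6 vertical) follow the abstract, but the model, the loss, and the equal-weight combination of the two contributions are assumptions, not the letter's actual HoVeFL formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 240 samples, 8 features (stand-in for the federation's data).
X = rng.normal(size=(240, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.1 * rng.normal(size=240)

# Horizontal-FL devices: all 8 features, disjoint (possibly non-IID) sample shards.
horizontal_shards = np.array_split(np.arange(240), 12)   # 12 horizontal devices
# Vertical-FL devices: the same samples, disjoint feature blocks.
vertical_blocks = np.array_split(np.arange(8), 6)        # 6 vertical devices

def shard_gradient(w, X_loc, y_loc):
    """Least-squares gradient on one horizontal device's local shard."""
    return X_loc.T @ (X_loc @ w - y_loc) / len(y_loc)

w_global, lr = np.zeros(8), 0.1
for _ in range(200):
    # Horizontal side: FedAvg-style average of full-dimension local gradients.
    g_h = np.mean([shard_gradient(w_global, X[s], y[s]) for s in horizontal_shards], axis=0)

    # Vertical side: devices upload partial predictions for the shared samples,
    # the server forms the residual, and each device derives its block's gradient.
    residual = sum(X[:, b] @ w_global[b] for b in vertical_blocks) - y
    g_v = np.zeros(8)
    for b in vertical_blocks:
        g_v[b] = X[:, b].T @ residual / len(y)

    # Assumed combination rule: equal weight to the two device populations.
    w_global -= lr * 0.5 * (g_h + g_v)

print("parameter error:", np.linalg.norm(w_global - w_true).round(3))
```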
MIRA: A Method of Federated Multi-Task Learning for Large Language Models
Pub Date : 2025-02-07 DOI: 10.1109/LNET.2025.3539810
Ahmed Elbakary;Chaouki Ben Issaid;Tamer ElBatt;Karim Seddik;Mehdi Bennis
In this letter, we introduce a method for fine-tuning Large Language Models (LLMs) in a federated manner, inspired by multi-task learning. Our approach leverages the structure of each client’s model and enables a learning scheme that considers other clients’ tasks and data distributions. To mitigate the extensive computational and communication overhead often associated with LLMs, we utilize a parameter-efficient fine-tuning method, specifically Low-Rank Adaptation (LoRA), to reduce the number of trainable parameters. Experimental results with different datasets and models demonstrate the proposed method’s effectiveness, in terms of both global and local performance, compared to existing frameworks for federated fine-tuning of LLMs. The proposed scheme outperforms existing baselines by achieving a lower local loss for each client while maintaining comparable global performance.
Citations: 0
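
To make the parameter-efficient piece of the abstract concrete, the sketch below implements a minimal LoRA-style setup in plain NumPy: a frozen weight matrix stands in for a pre-trained LLM layer, each client fine-tunes only low-rank adapters A and B on its own task, and the server exchanges and averages just those adapters. The plain averaging step is a placeholder; MIRA's task-aware aggregation across clients is the letter's contribution and is not reproduced here. All names, dimensions, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, rank, n_clients = 16, 4, 2, 5

# Shared, frozen "backbone" weight (stand-in for a pre-trained LLM layer).
W_frozen = rng.normal(size=(d_out, d_in))

# Each client has its own task: a slightly different target mapping and its own data.
client_data = []
for _ in range(n_clients):
    X = rng.normal(size=(64, d_in))
    W_task = W_frozen + 0.3 * rng.normal(size=(d_out, d_in))
    client_data.append((X, X @ W_task.T))

def lora_forward(X, A, B):
    # Effective weight = frozen backbone + low-rank update B @ A.
    return X @ (W_frozen + B @ A).T

# Global low-rank adapters; only these rank * (d_in + d_out) values are exchanged.
A_g = rng.normal(scale=0.3, size=(rank, d_in))
B_g = np.zeros((d_out, rank))

lr = 0.05
for _ in range(100):
    A_updates, B_updates = [], []
    for X, Y in client_data:
        A, B = A_g.copy(), B_g.copy()
        for _ in range(3):                                  # a few local fine-tuning steps
            R = (lora_forward(X, A, B) - Y) / len(X)        # scaled residual
            dW = R.T @ X                                    # gradient w.r.t. the effective weight
            A, B = A - lr * (B.T @ dW), B - lr * (dW @ A.T)
        A_updates.append(A)
        B_updates.append(B)
    # Plain averaging here; MIRA's task-aware aggregation would instead weight
    # clients by task and data similarity.
    A_g = np.mean(A_updates, axis=0)
    B_g = np.mean(B_updates, axis=0)

loss = np.mean([(lora_forward(X, A_g, B_g) - Y) ** 2 for X, Y in client_data])
print(f"average fine-tuning loss across tasks: {loss:.4f}")
```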
Large Language Model Agents for Radio Map Generation and Wireless Network Planning
Pub Date : 2025-02-07 DOI: 10.1109/LNET.2025.3539829
Hongye Quan;Wanli Ni;Tong Zhang;Xiangyu Ye;Ziyi Xie;Shuai Wang;Yuanwei Liu;Hui Song
Using commercial software for radio map generation and wireless network planning often requires complex manual operations, which poses significant challenges in terms of scalability, adaptability, and user-friendliness. To address these issues, we propose an automated solution that employs large language model (LLM) agents. These agents are designed to autonomously generate radio maps and facilitate wireless network planning for specified areas, thereby minimizing the need for extensive manual intervention. To validate the effectiveness of the proposed solution, we develop a software platform that integrates LLM agents. Experimental results demonstrate that the proposed LLM agents save a large amount of manual operation and that the automated solution achieves enhanced coverage and signal-to-interference-plus-noise ratio (SINR), especially in urban environments.
Citations: 0
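
The abstract describes LLM agents that drive radio-map generation and network planning with minimal manual intervention. The sketch below shows only the generic tool-calling agent loop such a platform typically builds on: the "LLM" here is a canned offline stub (call_llm), and generate_radio_map / evaluate_coverage are toy stand-ins, not the software platform, tools, or prompts from the letter.

```python
import json
from typing import Callable, Dict

# --- Tools the agent may invoke (toy stand-ins, not the letter's platform). ---
def generate_radio_map(area: str, tx_positions: list) -> dict:
    """Pretend radio-map generator: returns a fake coverage summary."""
    return {"area": area, "tx": tx_positions, "mean_sinr_db": 12.5 + 1.5 * len(tx_positions)}

def evaluate_coverage(radio_map: dict, sinr_target_db: float) -> dict:
    """Pretend planning check: does the map meet the SINR target?"""
    return {"meets_target": radio_map["mean_sinr_db"] >= sinr_target_db}

TOOLS: Dict[str, Callable] = {
    "generate_radio_map": generate_radio_map,
    "evaluate_coverage": evaluate_coverage,
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call. It returns a canned tool-call in JSON so the
    loop runs offline; in practice this would query an actual model."""
    if "radio map" in prompt and "RESULT" not in prompt:
        return json.dumps({"tool": "generate_radio_map",
                           "args": {"area": "campus_block_A", "tx_positions": [[0, 0], [50, 80]]}})
    return json.dumps({"tool": "evaluate_coverage", "args": {"sinr_target_db": 10.0}})

# --- Minimal agent loop: ask, execute the requested tool, feed the result back. ---
prompt = "Plan a wireless deployment and build a radio map for campus_block_A."
radio_map = None
for step in range(2):
    decision = json.loads(call_llm(prompt))
    tool, args = decision["tool"], decision["args"]
    if tool == "evaluate_coverage":
        args["radio_map"] = radio_map          # pass the previously generated map along
    result = TOOLS[tool](**args)
    if tool == "generate_radio_map":
        radio_map = result
    prompt += f"\nRESULT {step}: {tool} -> {json.dumps(result)}"
    print(f"step {step}: {tool} -> {result}")
```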
AoI-Aware and Privacy Protection Incentive Mechanism for Crowdsensing Networks
Pub Date : 2025-02-03 DOI: 10.1109/LNET.2025.3538172
Xuying Zhou;Jingyi Xu;Wenqian Zhou;Dusit Niyato;Chau Yuen
In Crowdsensing Networks, the freshness of sensing data, measured by the Age of Information (AoI), is critical for accurate analysis and reliable decisions. However, Sensing Users (SUs) are reluctant to execute frequent sensing without any incentive, since it incurs not only inevitable energy consumption but also potential privacy leakage. Adopting Differential Privacy (DP) can effectively protect the privacy of SUs, though it degrades the AoI performance. To address this issue, we propose a freshness-aware, privacy-preserving incentive mechanism to balance the trade-off between data value and privacy. SUs are classified by their update cycles, which are unknown to the Sensing Platform (SP). We therefore design a contract that solves this information-asymmetry problem and prove that it is optimal and truth-telling. Finally, numerical results demonstrate that the proposed contract is feasible and achieves a higher utility for the SP than other mechanisms.
Citations: 0
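
The mechanism above hinges on contract design under information asymmetry: the SP offers a menu of items without knowing each SU's type. The toy check below encodes that idea with a made-up utility model, a menu of (update period, reward) items, and the two standard contract properties, individual rationality (IR) and incentive compatibility (IC), plus an average-AoI proxy of tau/2 for periodic updates with negligible delay. The numbers, types, and utility functions are illustrative assumptions and are not taken from the letter.

```python
import itertools

# Illustrative SU types: cost per sensing update (private information in the letter's model).
types = {"low_cost": 1.0, "high_cost": 3.0}

# Hypothetical contract menu offered by the SP: (update period tau [s], reward r).
# Shorter periods mean fresher data (average AoI ~ tau / 2 for periodic, zero-delay
# updates) but more sensing effort for the SU.
menu = {"low_cost": (2.0, 1.2), "high_cost": (5.0, 0.8)}

def su_utility(cost_per_update, tau, reward):
    # Reward minus sensing effort per unit time (1/tau updates per second).
    return reward - cost_per_update / tau

# Individual rationality: each type gets non-negative utility from its own item.
for t, cost in types.items():
    tau, r = menu[t]
    assert su_utility(cost, tau, r) >= 0, f"IR violated for {t}"

# Incentive compatibility: no type prefers the item designed for another type.
for t, other in itertools.permutations(types, 2):
    own = su_utility(types[t], *menu[t])
    dev = su_utility(types[t], *menu[other])
    assert own >= dev, f"IC violated: {t} prefers {other}'s contract"

# SP-side check: value of freshness (decreasing in average AoI) minus rewards paid.
aoi = {t: menu[t][0] / 2 for t in types}
sp_utility = sum(2.0 - 0.3 * aoi[t] - menu[t][1] for t in types)
print("average AoI per type:", aoi, " SP utility:", round(sp_utility, 2))
```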
Fairness-Aware Demodulator Allocation in LoRa Multi-Gateway Networks
Pub Date : 2025-01-22 DOI: 10.1109/LNET.2025.3532765
Alexandre Guitton;Megumi Kaneko
The literature has shown a drastic decrease in the achievable LoRa network throughput due to the limited number of demodulators supported by LoRaWAN gateways. Unlike existing approaches, in this letter we design fairness-aware algorithms under this stringent constraint. By taking the efficient demodulation time ratio as the fairness metric, our algorithms prioritize frames with larger spreading factors while increasing the total demodulation time through collaboration among gateways. Numerical results demonstrate that our proposed methods largely outperform LoRaWAN baselines while closely approaching their performance upper bounds.
Citations: 0
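
As a hedged illustration of the allocation problem in the abstract, the sketch below implements a simple greedy heuristic: each gateway has a fixed number of demodulators, any gateway that received a frame may demodulate it (the collaboration aspect), and overlapping frames with larger spreading factors are served first. The returned demodulated-airtime ratio loosely mirrors the efficient demodulation time ratio used as the fairness metric; the heuristic, the frame set, and the demodulator counts are assumptions, not the letter's algorithms.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    start: float          # arrival time [s]
    airtime: float        # on-air duration [s], grows with the spreading factor
    sf: int               # spreading factor, 7..12
    gateways: List[int]   # gateways that received this frame

@dataclass
class Gateway:
    demodulators: int                     # limited demodulator pool per gateway
    busy_until: List[float] = field(default_factory=list)

    def free_slot(self, t: float) -> bool:
        self.busy_until = [b for b in self.busy_until if b > t]
        return len(self.busy_until) < self.demodulators

def allocate(frames: List[Frame], gws: List[Gateway]) -> float:
    """Greedy, fairness-oriented heuristic (not the letter's exact algorithm):
    at each arrival, higher-SF frames are served first, and any gateway that
    heard a frame may demodulate it. Returns the demodulated-airtime ratio."""
    served_time, total_time = 0.0, 0.0
    for f in sorted(frames, key=lambda fr: (fr.start, -fr.sf)):
        total_time += f.airtime
        for g_id in f.gateways:
            if gws[g_id].free_slot(f.start):
                gws[g_id].busy_until.append(f.start + f.airtime)
                served_time += f.airtime
                break
    return served_time / total_time

frames = [
    Frame(0.0, 0.40, 12, [0, 1]),
    Frame(0.0, 0.10, 7,  [0]),
    Frame(0.1, 0.20, 9,  [1]),
    Frame(0.1, 0.40, 12, [0, 1]),
]
print("demodulation time ratio:", allocate(frames, [Gateway(1), Gateway(1)]))
```

Running it with two single-demodulator gateways and four overlapping frames serves the SF12 frame at t = 0 ahead of the SF7 one, illustrating how the scarce demodulators are reserved for the longer, higher-SF transmissions.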
Energy-Efficient Split Learning for Fine-Tuning Large Language Models in Edge Networks
Pub Date : 2025-01-16 DOI: 10.1109/LNET.2025.3530430
Zuguang Li;Shaohua Wu;Liang Li;Songge Zhang
In this letter, we propose an energy-efficient split learning (SL) framework for fine-tuning large language models (LLMs) using geo-distributed personal data at the network edge, where LLMs are split and trained alternately across massive numbers of mobile devices and an edge server. Considering device heterogeneity and channel dynamics in edge networks, a Cut lAyer and computing Resource Decision (CARD) algorithm is developed to minimize training delay and energy consumption. Simulation results demonstrate that, compared to the benchmarks, the proposed approach reduces the average training delay and the server’s energy consumption by 70.8% and 53.1%, respectively.
Citations: 0
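
The CARD algorithm jointly picks a cut layer and computing resources; the brute-force sketch below shows only the flavor of that trade-off, scoring each candidate cut layer by device-side compute delay, smashed-data upload delay, server-side delay, and device energy, then choosing the minimizer. The layer costs, device power, uplink rate, and the weighting between delay and energy are made-up parameters, not those of the letter.

```python
# Per-layer cost of a toy model: device-side compute (GFLOPs) and activation size at the cut (MB).
layers = [
    {"name": "embed",  "gflops": 0.4, "act_mb": 1.5},
    {"name": "block1", "gflops": 2.0, "act_mb": 1.0},
    {"name": "block2", "gflops": 2.0, "act_mb": 1.0},
    {"name": "block3", "gflops": 2.0, "act_mb": 0.5},
]
TOTAL_GFLOPS = sum(l["gflops"] for l in layers)

def cost(cut, dev_gflops_s, dev_watt, uplink_mbps, srv_gflops_s, weight_energy=0.5):
    """Weighted delay/energy objective for cutting the model after layer index `cut`.
    All device, channel, and server parameters are illustrative assumptions."""
    dev_load = sum(l["gflops"] for l in layers[: cut + 1])
    t_dev = dev_load / dev_gflops_s                     # device-side forward pass [s]
    t_up = layers[cut]["act_mb"] * 8 / uplink_mbps      # smashed-data upload [s]
    t_srv = (TOTAL_GFLOPS - dev_load) / srv_gflops_s    # server-side remainder [s]
    delay = t_dev + t_up + t_srv
    energy = dev_watt * t_dev + 0.8 * t_up              # device compute + radio energy [J]
    return delay + weight_energy * energy, delay, energy

# Pick the cut layer minimizing the weighted delay/energy objective for one device.
best = min(
    (cost(c, dev_gflops_s=5.0, dev_watt=2.0, uplink_mbps=20.0, srv_gflops_s=100.0) + (c,)
     for c in range(len(layers))),
    key=lambda t: t[0],
)
obj, delay, energy, cut = best
print(f"best cut after layer '{layers[cut]['name']}': "
      f"objective={obj:.3f}, delay={delay:.3f}s, energy={energy:.3f}J")
```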