Pub Date : 2025-02-11DOI: 10.1109/LNET.2024.3519937
Hatim Chergui;Kamel Tourki;Jun Wu
The advent of 6G networks heralds a new era of telecommunications characterized by unparalleled connectivity, ultra-low latency, and immersive applications such as holographic communication and Industry 5.0. However, these advancements also introduce significant complexities in network management and service orchestration. This Special Issue of IEEE Networking Letters explores cutting-edge research on Artificial Intelligence (AI)-driven automation techniques designed to address these challenges. The selected works span a diverse array of AI paradigms—ranging from generative AI (GenAI) and reinforcement learning to multi-agent systems and federated learning—showcasing their applications across various 6G technological domains. By highlighting these innovations, this issue aims to provide valuable insights into the pivotal role of AI in shaping the future of 6G networks.
{"title":"Editorial SI on Advances in AI for 6G Networks","authors":"Hatim Chergui;Kamel Tourki;Jun Wu","doi":"10.1109/LNET.2024.3519937","DOIUrl":"https://doi.org/10.1109/LNET.2024.3519937","url":null,"abstract":"The advent of 6G networks heralds a new era of telecommunications characterized by unparalleled connectivity, ultra-low latency, and immersive applications such as holographic communication and Industry 5.0. However, these advancements also introduce significant complexities in network management and service orchestration. This Special Issue of IEEE N<sc>etworking</small> L<sc>etters</small> explores cutting-edge research on Artificial Intelligence (AI)-driven automation techniques designed to address these challenges. The selected works span a diverse array of AI paradigms—ranging from generative AI (GenAI) and reinforcement learning to multi-agent systems and federated learning—showcasing their applications across various 6G technological domains. By highlighting these innovations, this issue aims to provide valuable insights into the pivotal role of AI in shaping the future of 6G networks.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"215-216"},"PeriodicalIF":0.0,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10880116","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-02-10DOI: 10.1109/LNET.2025.3540268
Kai Li;Yilei Liang;Xin Yuan;Wei Ni;Jon Crowcroft;Chau Yuen;Ozgur B. Akan
This letter puts forth HoVeFL, a new hybrid horizontal-vertical federated learning framework for mobile edge computing-enabled Internet of Things (EdgeIoT). In this framework, certain EdgeIoT devices train local models using the same data samples but analyze disparate data features, while the others focus on the same features using non-independent and identically distributed (non-IID) data samples. Thus, even though the data features are consistent, the data samples vary across devices. The proposed HoVeFL formulates the training of local and global models to minimize the global loss function. Performance evaluations on the CIFAR-10 and SVHN datasets reveal that the testing loss of HoVeFL with 12 horizontal FL devices and six vertical FL devices is 5.5% and 25.2% higher, respectively, compared to a setup with six horizontal FL devices and 12 vertical FL devices.
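The horizontal side of a hybrid scheme like HoVeFL aggregates devices that share the same feature space by weighted parameter averaging. A minimal FedAvg-style sketch of that aggregation step, with illustrative values (not the authors' implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (horizontal FL step).

    client_weights: list of 1-D parameter vectors, one per device.
    client_sizes:   number of local samples per device, used as weights.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)   # shape: (n_clients, n_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Three horizontal devices holding the same feature space:
w_global = fedavg(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    client_sizes=[10, 10, 20],
)
# → array([3.5, 4.5]): the device with 20 samples pulls the average its way
```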
{"title":"A Novel Framework of Horizontal-Vertical Hybrid Federated Learning for EdgeIoT","authors":"Kai Li;Yilei Liang;Xin Yuan;Wei Ni;Jon Crowcroft;Chau Yuen;Ozgur B. Akan","doi":"10.1109/LNET.2025.3540268","DOIUrl":"https://doi.org/10.1109/LNET.2025.3540268","url":null,"abstract":"This letter puts forth a new hybrid horizontal-vertical federated learning (HoVeFL) for mobile edge computing-enabled Internet of Things (EdgeIoT). In this framework, certain EdgeIoT devices train local models using the same data samples but analyze disparate data features, while the others focus on the same features using non-independent and identically distributed (non-IID) data samples. Thus, even though the data features are consistent, the data samples vary across devices. The proposed HoVeFL formulates the training of local and global models to minimize the global loss function. Performance evaluations on CIFAR-10 and SVHN datasets reveal that the testing loss of HoVeFL with 12 horizontal FL devices and six vertical FL devices is 5.5% and 25.2% higher, respectively, compared to a setup with six horizontal FL devices and 12 vertical FL devices.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 2","pages":"83-87"},"PeriodicalIF":0.0,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144308197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-02-07DOI: 10.1109/LNET.2025.3539810
Ahmed Elbakary;Chaouki Ben Issaid;Tamer ElBatt;Karim Seddik;Mehdi Bennis
In this letter, we introduce a method for fine-tuning Large Language Models (LLMs) in a federated manner, inspired by multi-task learning. Our approach leverages the structure of each client’s model and enables a learning scheme that accounts for other clients’ tasks and data distributions. To mitigate the extensive computational and communication overhead often associated with LLMs, we employ a parameter-efficient fine-tuning method, specifically Low-Rank Adaptation (LoRA), to reduce the number of trainable parameters. Experimental results on different datasets and models demonstrate the proposed method’s effectiveness compared to existing frameworks for federated fine-tuning of LLMs, in terms of both global and local performance. The proposed scheme outperforms existing baselines by achieving lower local loss for each client while maintaining comparable global performance.
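LoRA keeps the pre-trained weight frozen and trains only two low-rank factors, which is what makes federated fine-tuning of LLMs communication-tractable: clients exchange the factors, not the full matrix. A minimal sketch of the parameter saving, with illustrative dimensions (not the paper's setup):

```python
import numpy as np

d, k, r = 512, 512, 8   # layer dimensions and LoRA rank (illustrative)

W = np.zeros((d, k))                 # frozen pre-trained weight (stand-in)
A = np.random.randn(d, r) * 0.01     # trainable low-rank factor
B = np.zeros((r, k))                 # B starts at zero: update is initially null

def lora_forward(x):
    # Effective weight is W + A @ B; only A and B receive gradients.
    return x @ (W + A @ B)

full_params = d * k            # 262144 parameters in the dense layer
lora_params = d * r + r * k    # 8192 trainable parameters with rank 8
```

With these dimensions the trainable-parameter count drops by a factor of 32, which directly shrinks each client's upload in the federated rounds.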
{"title":"MIRA: A Method of Federated Multi-Task Learning for Large Language Models","authors":"Ahmed Elbakary;Chaouki Ben Issaid;Tamer ElBatt;Karim Seddik;Mehdi Bennis","doi":"10.1109/LNET.2025.3539810","DOIUrl":"https://doi.org/10.1109/LNET.2025.3539810","url":null,"abstract":"In this letter, we introduce a method for fine-tuning Large Language Models (LLMs), inspired by Multi-Task learning in a federated manner. Our approach leverages the structure of each client’s model and enables a learning scheme that considers other clients’ tasks and data distribution. To mitigate the extensive computational and communication overhead often associated with LLMs, we utilize a parameter-efficient fine-tuning method, specifically Low-Rank Adaptation (LoRA), to reduce the number of trainable parameters. Experimental results, with different datasets and models, demonstrate the proposed method’s effectiveness compared to existing frameworks for federated fine-tuning of LLMs in terms of global and local performances. The proposed scheme outperforms existing baselines by achieving lower local loss for each client, while maintaining comparable global performance.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 3","pages":"171-175"},"PeriodicalIF":0.0,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10877919","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145352014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-02-07DOI: 10.1109/LNET.2025.3539829
Hongye Quan;Wanli Ni;Tong Zhang;Xiangyu Ye;Ziyi Xie;Shuai Wang;Yuanwei Liu;Hui Song
Using commercial software for radio map generation and wireless network planning often requires complex manual operations, posing significant challenges in terms of scalability, adaptability, and user-friendliness. To address these issues, we propose an automated solution that employs large language model (LLM) agents. These agents are designed to autonomously generate radio maps and facilitate wireless network planning for specified areas, thereby minimizing the need for extensive manual intervention. To validate the effectiveness of the proposed solution, we develop a software platform that integrates the LLM agents. Experimental results demonstrate that a large amount of manual operation can be saved via the proposed LLM agents, and that the automated solution achieves enhanced coverage and signal-to-interference-plus-noise ratio (SINR), especially in urban environments.
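At its core, the planning task such agents automate is evaluating coverage and SINR over a grid of candidate locations. A simplified sketch under a log-distance path-loss model (all function names and parameter values are illustrative, not from the letter):

```python
import numpy as np

def rx_power_dbm(tx_dbm, dist_m, pl0_db=40.0, n=3.0):
    """Log-distance path loss: received power at dist_m (reference d0 = 1 m)."""
    return tx_dbm - (pl0_db + 10 * n * np.log10(np.maximum(dist_m, 1.0)))

def sinr_map(bs_xy, grid_xy, tx_dbm=30.0, noise_dbm=-100.0):
    """SINR (dB) at each grid point: strongest BS is the signal, others interfere."""
    d = np.linalg.norm(grid_xy[:, None, :] - bs_xy[None, :, :], axis=2)
    p_mw = 10 ** (rx_power_dbm(tx_dbm, d) / 10)   # (points, bs) in mW
    sig = p_mw.max(axis=1)
    interf = p_mw.sum(axis=1) - sig
    noise_mw = 10 ** (noise_dbm / 10)
    return 10 * np.log10(sig / (interf + noise_mw))
```

A planner (agent-driven or manual) would then score candidate base-station layouts by the fraction of grid points whose SINR clears a coverage threshold.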
{"title":"Large Language Model Agents for Radio Map Generation and Wireless Network Planning","authors":"Hongye Quan;Wanli Ni;Tong Zhang;Xiangyu Ye;Ziyi Xie;Shuai Wang;Yuanwei Liu;Hui Song","doi":"10.1109/LNET.2025.3539829","DOIUrl":"https://doi.org/10.1109/LNET.2025.3539829","url":null,"abstract":"Using commercial software for radio map generation and wireless network planning often require complex manual operations, posing significant challenges in terms of scalability, adaptability, and user-friendliness, due to heavy manual operations. To address these issues, we propose an automated solution that employs large language model (LLM) agents. These agents are designed to autonomously generate radio maps and facilitate wireless network planning for specified areas, thereby minimizing the necessity for extensive manual intervention. To validate the effectiveness of our proposed solution, we develop a software platform that integrates LLM agents. Experimental results demonstrate that a large amount manual operations can be saved via the proposed LLM agent, and the automated solutions can achieve an enhanced coverage and signal-to-interference-noise ratio (SINR), especially in urban environments.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 3","pages":"166-170"},"PeriodicalIF":0.0,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145352044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-02-03DOI: 10.1109/LNET.2025.3538172
Xuying Zhou;Jingyi Xu;Wenqian Zhou;Dusit Niyato;Chau Yuen
In Crowdsensing Networks, the freshness of sensing data, measured by the Age of Information (AoI), is critical for accurate analysis and reliable decisions. However, Sensing Users (SUs) are reluctant to execute frequent sensing without any incentive, since they incur not only inevitable energy consumption but also potential privacy leakage. Adopting Differential Privacy (DP) can effectively protect the privacy of SUs, though it reduces AoI performance. To address this issue, we propose a freshness-aware privacy-preserving incentive mechanism to balance the trade-off between data value and privacy. SUs are classified by their update cycles, which are unknown to the Sensing Platform (SP). Therefore, we design a contract to solve this information asymmetry problem, which is proved to be optimal and truth-telling. Finally, numerical results demonstrate that the proposed contract is feasible and achieves a higher utility for the SP compared with other mechanisms.
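For periodic updates, the AoI saw-tooth averages to half the update cycle plus the delivery delay, which is why longer cycles (cheaper in energy and privacy cost for the SU) degrade freshness for the SP. A one-line sketch of that relation, assuming a fixed delivery delay (a simplification, not the letter's exact model):

```python
def average_aoi(cycle, delay=0.0):
    """Mean Age of Information for periodic updates with cycle T and a
    fixed delivery delay D: the saw-tooth averages to T/2 + D."""
    return cycle / 2.0 + delay

# Doubling the update cycle nearly doubles the average staleness:
aoi_short = average_aoi(cycle=10.0, delay=1.0)   # 6.0
aoi_long  = average_aoi(cycle=20.0, delay=1.0)   # 11.0
```

An incentive mechanism then prices this gap: the SP pays more for contracts with shorter cycles (lower AoI) to offset the SU's extra energy and privacy cost.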
{"title":"AoI-Aware and Privacy Protection Incentive Mechanism for Crowdsensing Networks","authors":"Xuying Zhou;Jingyi Xu;Wenqian Zhou;Dusit Niyato;Chau Yuen","doi":"10.1109/LNET.2025.3538172","DOIUrl":"https://doi.org/10.1109/LNET.2025.3538172","url":null,"abstract":"In Crowdsensing Networks, the freshness of sensing data is critical for accurate analysis and reliable decisions, which is measured by Age of Information (AoI). However, the Sensing Users (SUs) are reluctant to execute frequent sensing without any incentive, since they incur not only the inevitable energy consumption but also the potential privacy leakage. Adopting Differential Privacy (DP) can effectively protect the privacy of SUs, through it reduces the AoI performance. To address this issue, we propose a freshness-aware privacy-preserving incentive mechanism to balance the trade-off between data value and privacy. SUs are classified with different update cycles, while the Sensing Platform (SP) is unknown about the information. Therefore, we design a contract to solve the information asymmetry problem, which is proved to be optimal and truth-telling. Finally, numerical results demonstrate that the proposed contract is feasible and achieves a utility for the SP when compared with other mechanisms.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 2","pages":"98-102"},"PeriodicalIF":0.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144308355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-22DOI: 10.1109/LNET.2025.3532765
Alexandre Guitton;Megumi Kaneko
The literature has shown a drastic decrease in achievable LoRa network throughput due to the limited number of demodulators supported by LoRaWAN gateways. Unlike existing approaches, in this letter we design fairness-aware algorithms under this stringent constraint. Taking the efficient demodulation time ratio as a fairness metric, our algorithms prioritize frames with larger spreading factors while increasing the total demodulation time through collaboration among gateways. Numerical results demonstrate that our proposed methods largely outperform LoRaWAN baselines while closely approaching their performance upper bounds.
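The case for prioritizing large spreading factors follows from LoRa's symbol duration, 2^SF / BW: each SF step doubles the time a frame occupies a demodulator, so high-SF frames are the first to be starved when demodulators are scarce. A short sketch, using Jain's index as a generic stand-in for a fairness metric (the letter's own metric is the efficient demodulation time ratio):

```python
def lora_symbol_time(sf, bw_hz=125_000):
    """LoRa symbol duration in seconds: 2^SF / BW."""
    return (2 ** sf) / bw_hz

def jain_index(xs):
    """Jain's fairness index over per-device allocations (1.0 = perfectly fair)."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

# SF12 symbols occupy a demodulator 32x longer than SF7 symbols:
ratio = lora_symbol_time(12) / lora_symbol_time(7)   # 32.0
```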
{"title":"Fairness-Aware Demodulator Allocation in LoRa Multi-Gateway Networks","authors":"Alexandre Guitton;Megumi Kaneko","doi":"10.1109/LNET.2025.3532765","DOIUrl":"https://doi.org/10.1109/LNET.2025.3532765","url":null,"abstract":"The literature has shown the drastic decrease of the achievable LoRa network throughput, due to the limited number of demodulators that are supported by LoRaWAN gateways. Unlike existing approaches, in this letter, we design fairness-aware algorithms under this stringent constraint. By taking the efficient demodulation time ratio as a fairness metric, our algorithms enable to prioritize frames with larger spreading factors, while increasing the total demodulation time thanks to collaboration among gateways. Numerical results demonstrate that our proposed methods largely outperform LoRaWAN baselines, while closely approaching their performance upper bounds.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 2","pages":"93-97"},"PeriodicalIF":0.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144308357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-16DOI: 10.1109/LNET.2025.3530430
Zuguang Li;Shaohua Wu;Liang Li;Songge Zhang
In this letter, we propose an energy-efficient split learning (SL) framework for fine-tuning large language models (LLMs) using geo-distributed personal data at the network edge, where LLMs are split and trained alternately across massive mobile devices and an edge server. Considering the device heterogeneity and channel dynamics in edge networks, a Cut lAyer and computing Resource Decision (CARD) algorithm is developed to minimize training delay and energy consumption. Simulation results demonstrate that, compared to the benchmarks, the proposed approach reduces the average training delay and the server’s energy consumption by 70.8% and 53.1%, respectively.
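A cut-layer decision of this kind can be sketched as enumerating candidate split points and scoring each with a weighted delay-plus-energy cost: the device runs the layers before the cut, uploads the activation, and the server runs the rest. The cost model below is a toy stand-in with hypothetical inputs, not the paper's CARD formulation:

```python
def choose_cut_layer(layer_flops, act_bits, dev_flops, srv_flops,
                     uplink_bps, energy_per_flop, w_delay=1.0, w_energy=1.0):
    """Pick the split point minimizing weighted (delay + device energy).

    Cutting after layer c: device computes layers [0..c], uploads the
    activation of layer c, server computes layers [c+1..].
    """
    best_c, best_cost = None, float("inf")
    for c in range(len(layer_flops)):
        dev_work = sum(layer_flops[: c + 1])
        delay = (dev_work / dev_flops
                 + act_bits[c] / uplink_bps
                 + sum(layer_flops[c + 1:]) / srv_flops)
        energy = dev_work * energy_per_flop   # device-side energy only
        cost = w_delay * delay + w_energy * energy
        if cost < best_cost:
            best_c, best_cost = c, cost
    return best_c, best_cost
```

With a slow uplink, the search naturally pushes the cut past layers with bulky activations, trading extra on-device compute for a smaller upload.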
{"title":"Energy-Efficient Split Learning for Fine-Tuning Large Language Models in Edge Networks","authors":"Zuguang Li;Shaohua Wu;Liang Li;Songge Zhang","doi":"10.1109/LNET.2025.3530430","DOIUrl":"https://doi.org/10.1109/LNET.2025.3530430","url":null,"abstract":"In this letter, we propose an energy-efficient split learning (SL) framework for fine-tuning large language models (LLMs) using geo-distributed personal data at the network edge, where LLMs are split and alternately across massive mobile devices and an edge server. Considering the device heterogeneity and channel dynamics in edge networks, a Cut lAyer and computing Resource Decision (CARD) algorithm is developed to minimize training delay and energy consumption. Simulation results demonstrate that the proposed approach reduces the average training delay and server’s energy consumption by 70.8% and 53.1%, compared to the benchmarks, respectively.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"7 3","pages":"176-180"},"PeriodicalIF":0.0,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145351874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}