
Journal of Information and Intelligence: Latest publications

Towards transparent 6G AI-RAN: A survey on explainable deep reinforcement learning for intelligent network slicing
Pub Date: 2026-01-01 | Epub Date: 2025-12-29 | DOI: 10.1016/j.jiixd.2025.12.005 | Volume 4, Issue 1, Pages 23-37
Shuaishuai Guo, Yutong Zhong, Zhenyu Feng, Shengqi Kang, Jichao Chen
The advent of the sixth generation (6G) wireless networks envisions an Artificial Intelligence (AI)-native Radio Access Network (AI-RAN), where Deep Reinforcement Learning (DRL) emerges as a key enabler for intelligent and autonomous network slicing. Despite the demonstrated performance gains of DRL-based solutions in dynamic resource allocation and slice orchestration, their opaque decision-making nature raises critical concerns regarding trust, accountability, and operational deployment. To bridge this gap, Explainable Deep Reinforcement Learning (XDRL) has recently attracted significant attention as a means to enhance transparency, interpretability, and controllability of AI-RAN slicing policies. This survey provides a comprehensive overview of the state of the art in explainable DRL for intelligent network slicing. We first review the fundamental principles of DRL in the context of RAN slicing and identify the unique explainability challenges posed by high-dimensional, multi-slice environments. We then categorize existing XDRL approaches into post-hoc explanation, symbolic abstraction, and human-in-the-loop steering, analyzing their methodologies, strengths, and limitations. Furthermore, we highlight benchmark environments and experimental testbeds that have been employed to evaluate XDRL in realistic network scenarios. Finally, we outline key open challenges, including scalability, generalization across traffic patterns, integration with Large Language Models (LLMs), and alignment with intent-based networking, and discuss promising research directions toward achieving transparent, trustworthy, and human-centric AI-RAN in 6G.
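As a concrete illustration of the post-hoc explanation category named in the abstract (not taken from the surveyed paper), the following minimal sketch computes a gradient-saliency attribution for a toy DRL slicing policy. The network, feature names, and dimensions are hypothetical placeholders.

```python
# Minimal post-hoc explanation sketch: gradient saliency over a toy DRL
# slicing policy. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SlicingPolicy(nn.Module):
    """Toy policy mapping a slice-state vector to action logits."""
    def __init__(self, state_dim=6, num_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state):
        return self.net(state)

def explain_action(policy, state, feature_names):
    """Attribute the chosen action to input features via input gradients."""
    state = state.clone().detach().requires_grad_(True)
    logits = policy(state)
    action = int(logits.argmax(dim=-1))
    logits[0, action].backward()              # d(logit of chosen action)/d(state)
    saliency = state.grad.abs().squeeze(0)
    return action, dict(zip(feature_names, saliency.tolist()))

features = ["eMBB_load", "URLLC_load", "mMTC_load", "PRB_util", "latency", "throughput"]
policy = SlicingPolicy(state_dim=len(features))
state = torch.rand(1, len(features))
action, attribution = explain_action(policy, state, features)
print("action:", action, "| per-feature saliency:", attribution)
```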
Citations: 0
Artificial Intelligence-native Radio Access Networks (AI-RAN): Foundations, methodologies, and applications
Pub Date: 2026-01-01 | Epub Date: 2026-01-02 | DOI: 10.1016/j.jiixd.2025.12.012 | Volume 4, Issue 1, Pages 1-4
Chenxi Liu, Howard H. Yang, Kun Guo, Wenchao Xia, Chenyuan Feng, Tony Q.S. Quek
Citations: 0
Fast collaborative inference via distributed speculative decoding
Pub Date: 2026-01-01 | Epub Date: 2026-01-10 | DOI: 10.1016/j.jiixd.2025.12.008 | Volume 4, Issue 1, Pages 67-85
Ce Zheng, Ke Zhang, Chen Sun, Wenqi Zhang, Qiong Liu, Angesom Ataklity Tesfay
Speculative decoding accelerates Large Language Model (LLM) inference by allowing a lightweight draft model to predict multiple future tokens that are subsequently verified by a larger target model. In AI-native Radio Access Networks (AI-RAN), this mechanism naturally enables device-edge collaborative inference. However, existing distributed speculative decoding schemes incur significant uplink communication overhead, as they require transmitting full-vocabulary logits at every decoding step. To address this challenge, we propose a sparsify-then-sample strategy, termed Truncated Sparse Logits Transmission (TSLT), which transmits only the logits and indices of a truncated candidate set. We provide theoretical guarantees showing that TSLT preserves the acceptance rate of speculative decoding. The proposed framework is further extended to a multi-candidate setting, where multiple draft candidates per step increase the acceptance probability. Extensive experiments demonstrate that TSLT substantially reduces uplink communication while maintaining end-to-end inference latency and model quality, validating its effectiveness for scalable and communication-efficient distributed LLM inference in future AI-RAN systems.
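One way to read the sparsify-then-sample idea: the device uplinks only the top-k draft logits and their vocabulary indices, and the edge reconstructs a truncated draft distribution for the standard speculative acceptance test. The sketch below illustrates that reading; the truncation size, renormalization, and channel details are assumptions, not the paper's exact TSLT algorithm.

```python
# Illustrative sketch of sparsified logit transmission + speculative sampling.
# Only the truncated candidate set (indices + logits) is "uplinked".
import numpy as np

def truncate_logits(logits, k=32):
    """Keep only the k largest logits and their vocabulary indices."""
    idx = np.argpartition(logits, -k)[-k:]
    return idx, logits[idx]

def sparse_softmax(indices, values, vocab_size):
    """Rebuild a renormalized draft distribution supported on the sent indices."""
    p = np.zeros(vocab_size)
    e = np.exp(values - values.max())
    p[indices] = e / e.sum()
    return p

def speculative_accept(draft_token, p_draft, p_target, rng):
    """Standard speculative-decoding accept/resample step."""
    q, p = p_draft[draft_token], p_target[draft_token]
    if rng.random() < min(1.0, p / max(q, 1e-12)):
        return draft_token
    residual = np.maximum(p_target - p_draft, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(p_target), p=residual))

rng = np.random.default_rng(0)
vocab = 50_000
draft_logits = rng.normal(size=vocab)            # produced on the device
idx, vals = truncate_logits(draft_logits, k=32)  # only this is transmitted
p_draft = sparse_softmax(idx, vals, vocab)       # reconstructed at the edge
draft_token = int(rng.choice(idx, p=p_draft[idx] / p_draft[idx].sum()))
p_target = np.exp(rng.normal(size=vocab)); p_target /= p_target.sum()
print("accepted token:", speculative_accept(draft_token, p_draft, p_target, rng))
```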
Citations: 0
QoAIS-guaranteed AI service offloading in IoV scenario enabled by 6G native AI network
Pub Date: 2026-01-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.jiixd.2025.12.003 | Volume 4, Issue 1, Pages 54-66
Guangyi Liu, Xinyao Wang, Tianjiao Chen, Yaru Li, Jing Jin
The emergence of the 6G native AI network offers new opportunities to support accuracy-critical and latency-sensitive AI inference tasks in Internet-of-Vehicles (IoV) scenarios. However, existing computation offloading schemes often treat the network merely as a communication pipeline or edge computing node, lacking joint scheduling of communication and computational resources in a native AI infrastructure. Besides, deterioration of the Vehicle-to-Infrastructure (V2I) channel is likely to decrease the inference accuracy of AI tasks by degrading the quality of offloaded data. To this end, this paper considers a road lane detection task in an IoV scenario and proposes a Quality of AI Service (QoAIS)-guaranteed AI service offloading architecture. Firstly, the functional relationship between inference accuracy and Signal-to-Noise Ratio (SNR) under different Modulation and Coding Schemes (MCS) is established through numerical experiments. On this basis, a cross-layer optimization framework is introduced to maximize the number of QoAIS-guaranteed tasks by jointly optimizing MCS selection, uplink bandwidth allocation, and computing resource allocation. A Particle Swarm Optimization (PSO) algorithm is introduced to solve this problem. Numerical results show that the proposed PSO algorithm significantly increases the number of QoAIS-guaranteed tasks compared with the baselines.
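A minimal PSO sketch of the kind of joint allocation the abstract describes is shown below: each particle encodes per-task bandwidth and compute shares, and fitness counts tasks meeting a latency budget. The fitness model, SNR, and all constants are illustrative placeholders, not the paper's system model.

```python
# Toy PSO over joint bandwidth/compute allocation. All constants are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_TASKS = 8
DATA_BITS = rng.uniform(1e5, 5e5, N_TASKS)    # offloaded bits per task (toy)
CYCLES = rng.uniform(2e8, 8e8, N_TASKS)       # CPU cycles per task (toy)
BW_HZ, CPU_HZ, SNR, LAT_MAX = 20e6, 50e9, 10.0, 0.08
DIM = 2 * N_TASKS                             # per task: [bandwidth share, compute share]

def fitness(x):
    """Count tasks whose end-to-end offloading latency stays within the budget."""
    shares = x.reshape(N_TASKS, 2)
    shares = shares / shares.sum(axis=0, keepdims=True)      # normalize allocations
    rate = shares[:, 0] * BW_HZ * np.log2(1 + SNR)
    latency = DATA_BITS / rate + CYCLES / (shares[:, 1] * CPU_HZ)
    return int(np.sum(latency <= LAT_MAX))

P, ITERS, W, C1, C2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.uniform(0.05, 1.0, (P, DIM))
vel = np.zeros((P, DIM))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((P, DIM)), rng.random((P, DIM))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 1.0)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("tasks meeting the latency budget:", fitness(gbest), "of", N_TASKS)
```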
Citations: 0
AI-RAN: The pathway to future wireless networks
Pub Date: 2026-01-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.jiixd.2025.12.013 | Volume 4, Issue 1, Pages 5-22
Chenyuan Feng, Howard H. Yang, Kun Guo, Wenchao Xia, Chenxi Liu, Tony Q.S. Quek
With the rapid advancement of Artificial Intelligence (AI), the Radio Access Network (RAN) is poised to undergo a transformative evolution toward the convergence of AI and RAN functionalities, referred to as the AI-RAN paradigm. AI-RAN integrates high-performance computing resources into RAN infrastructures, thereby enabling the execution of both AI and RAN workloads on the same infrastructure. As a result, it improves resource utilization, reduces energy consumption, and promotes swift AI-related responses. In this paper, we provide a comprehensive overview of AI-RAN, whereby we broadly categorize the discussion into three aspects: AI and RAN, AI for RAN, and AI on RAN. In particular, we begin with AI and RAN, which encompass the hardware architecture, software stack, as well as orchestration of computational and communication resources. We subsequently elaborate on AI for RAN, examining various approaches to leveraging AI methods to enhance RAN performance. For the topic of AI on RAN, we conduct an in-depth investigation into the schemes that take RAN as a platform to facilitate AI services, where we review distributed learning for multi-cell and multi-vendor RANs, including federated and multi-agent reinforcement learning, highlighting issues of data heterogeneity, control-plane overhead, convergence under mobility, privacy, and adversarial robustness in the RAN ecosystems. We also demonstrate several use cases pertaining to the AI-RAN framework. We conclude by outlining key open issues and research directions.
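Among the AI-on-RAN schemes the survey reviews, federated learning is the most self-contained to illustrate. The sketch below shows one federated-averaging round structure over synthetic per-cell linear models; the data, model, and hyperparameters are placeholder assumptions, not a specific AI-RAN workload.

```python
# Minimal FedAvg sketch: each "cell" trains locally, the aggregator averages weights.
import numpy as np

rng = np.random.default_rng(2)
true_w = rng.normal(size=4)

def make_cell_data(n=200):
    """Synthetic per-cell dataset with cell-specific noise (data heterogeneity)."""
    X = rng.normal(size=(n, 4))
    y = X @ true_w + rng.normal(scale=rng.uniform(0.1, 0.5), size=n)
    return X, y

def local_sgd(w, X, y, lr=0.05, epochs=5):
    """A few epochs of local gradient descent on one cell's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

cells = [make_cell_data() for _ in range(5)]
w_global = np.zeros(4)
for _ in range(10):                                        # federated rounds
    local = [local_sgd(w_global.copy(), X, y) for X, y in cells]
    sizes = np.array([len(y) for _, y in cells], dtype=float)
    w_global = np.average(local, axis=0, weights=sizes)    # FedAvg aggregation

print("recovered weights:", np.round(w_global, 2), "true:", np.round(true_w, 2))
```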
Citations: 0
AI-RAN resource configuration for non-collaborative cross-domain slicing
Pub Date: 2026-01-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.jiixd.2025.12.001 | Volume 4, Issue 1, Pages 38-53
Ruihan Wen, Gang Feng, Chengjie Li, Haokang Lou, Qiping Xu, Tao Liu, Zihan Chen
In the evolution from 5G to 6G, network slicing faces challenges due to the coexistence of multiple operators and service providers, where privacy constraints on configuration data and parameters restrict end-to-end slice deployment. This issue is more pronounced in the 6G AI-RAN architecture, especially when cross-domain coordination between the Radio Access Network (RAN) and Core Network (CN) is insufficient, leading to inefficient slice orchestration and management. To address these challenges, we propose three solutions: (1) A non-collaborative cross-domain slicing orchestration architecture for 6G AI-RAN, enabling an adaptive orchestration framework driven by slice metric prediction; (2) a cross-domain Key Performance Indicator (KPI) prediction mechanism combining Graph Convolutional and Attention Networks (GCN-GAT) with a Transformer model to analyze the impact of cross-domain features on RAN slice performance; and (3) an intelligent RAN slice resource orchestration strategy driven by metric prediction. Experiments on real multi-domain datasets demonstrate that the proposed framework outperforms baseline methods by improving end-to-end slice service quality, while significantly enhancing system throughput and reducing both resource overhead and average load.
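A simplified sketch of the prediction pipeline the abstract outlines is shown below: a normalized-adjacency graph convolution mixes per-domain node features at each time step, and a Transformer encoder models the temporal sequence before a KPI regression head. The GAT branch is omitted, and the dimensions and data are placeholder assumptions rather than the paper's model.

```python
# Graph convolution over RAN/CN nodes + Transformer over time, then a KPI head.
import torch
import torch.nn as nn

class GraphConvKPIPredictor(nn.Module):
    def __init__(self, num_nodes, feat_dim, d_model=32):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                       dim_feedforward=64, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(d_model, 1)

    def forward(self, x, adj):
        # x: (batch, time, nodes, feat); adj: (nodes, nodes) with self-loops
        deg = adj.sum(-1, keepdim=True)
        a_hat = adj / deg.clamp(min=1)                  # row-normalized adjacency
        x = torch.einsum("ij,btjf->btif", a_hat, x)     # graph convolution step
        x = self.proj(x).mean(dim=2)                    # pool nodes -> (batch, time, d)
        h = self.temporal(x)
        return self.head(h[:, -1])                      # predict next-step slice KPI

nodes, feats = 6, 8                                     # e.g., RAN/CN network functions
model = GraphConvKPIPredictor(nodes, feats)
x = torch.rand(4, 12, nodes, feats)                     # 12 past time steps
adj = torch.eye(nodes) + torch.rand(nodes, nodes).round()
print("predicted slice KPI:", model(x, adj).squeeze(-1))
```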
Citations: 0
Capacity enhancement in multirelay-assisted hybrid SWIPT wireless communications
Pub Date: 2025-11-01 | Epub Date: 2025-04-04 | DOI: 10.1016/j.jiixd.2025.01.001 | Volume 3, Issue 6, Pages 504-514
Xuan Wang, Danyang Yu, Yi Liu
In this paper, we propose an advanced multirelay-assisted hybrid (M-AH) simultaneous wireless information and power transfer (SWIPT) scheme to enhance the capacity in wireless communication systems. With the proposed scheme, the harvested energy at the relays within the same cluster can be utilized to improve the service quality of the optimal relay. Notably, the optimal relay is determined through an opportunistic relay selection approach. Moreover, we introduce a four-phase transmission strategy and develop an iterative optimization algorithm to maximize the system capacity (SC) while considering time slot and power constraints. The simulation results demonstrate that our proposed scheme outperforms existing schemes.
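Opportunistic relay selection typically picks the relay whose weaker hop is strongest; the sketch below illustrates that max-min rule with a toy harvested-energy weighting. The Rayleigh channel model and energy terms are assumptions for illustration, not the paper's exact M-AH SWIPT formulation.

```python
# Max-min opportunistic relay selection with a toy harvested-energy constraint.
import numpy as np

rng = np.random.default_rng(3)
num_relays = 5
p_source = 1.0                                     # normalized source transmit power

h_sr = rng.rayleigh(scale=1.0, size=num_relays)    # source -> relay channel gains
h_rd = rng.rayleigh(scale=1.0, size=num_relays)    # relay -> destination channel gains
energy = rng.uniform(0.5, 1.5, size=num_relays)    # harvested energy per relay (toy)

snr_sr = p_source * h_sr**2
snr_rd = energy * h_rd**2                          # relay power limited by harvested energy
end_to_end = np.minimum(snr_sr, snr_rd)            # two-hop bottleneck SNR

best = int(np.argmax(end_to_end))                  # opportunistic selection
capacity = 0.5 * np.log2(1 + end_to_end[best])     # half prelog for two-hop relaying
print(f"selected relay {best}, end-to-end SNR {end_to_end[best]:.2f}, "
      f"capacity {capacity:.2f} bit/s/Hz")
```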
Citations: 0
Roughness-informed machine learning – A call for fractal and fractional calculi
Pub Date: 2025-11-01 | Epub Date: 2025-09-11 | DOI: 10.1016/j.jiixd.2025.09.001 | Volume 3, Issue 6, Pages 463-480
Mohammad Partohaghighi, Roummel F. Marcia, Bruce J. West, YangQuan Chen
This paper presents a unified framework for roughness-informed machine learning, dividing roughness into four categories: statistical, geometric, manifold, and topological. Statistical roughness, analyzed with tools like WeightWatcher, utilizes heavy-tailed weight distributions. Geometric roughness, measured by a novel roughness index, quantifies oscillatory patterns in loss landscapes. Manifold roughness, captured by the two-scale effective dimension, integrates local geometry (via the Fisher information matrix) with global parameter space complexity. Topological roughness, derived from persistence diagrams, evaluates structural complexity of learned functions. Experiments on MNIST, CIFAR-10, CIFAR-100, a damped harmonic oscillator, a fractional-order ODE, and the wave equation demonstrate the framework's effectiveness: statistical roughness enhances federated learning convergence, geometric roughness improves training stability, manifold roughness optimizes generalization through noise injection, and topological roughness ensures smoother, physically accurate solutions. The framework advances model design, optimization, and generalization, with links to fractal and fractional calculus.
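As a rough illustration of the geometric category, the sketch below samples the loss along a random parameter direction and reports total variation relative to net change. This simple ratio is a stand-in assumption for "oscillation in the loss landscape", not the paper's actual roughness index.

```python
# Toy geometric-roughness proxy: total variation of a 1-D loss slice / net change.
import numpy as np

rng = np.random.default_rng(4)

def loss(w, X, y):
    """Plain least-squares loss for a linear model (toy landscape)."""
    return np.mean((X @ w - y) ** 2)

def roughness_index(loss_fn, w, direction, radius=1.0, steps=200):
    """1.0 means a monotone slice; larger values mean a bumpier landscape."""
    ts = np.linspace(-radius, radius, steps)
    vals = np.array([loss_fn(w + t * direction) for t in ts])
    total_variation = np.abs(np.diff(vals)).sum()
    net_change = abs(vals[-1] - vals[0]) + 1e-12
    return total_variation / net_change

X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
w = rng.normal(size=5)
d = rng.normal(size=5)
d /= np.linalg.norm(d)

smooth = roughness_index(lambda v: loss(v, X, y), w, d)
noisy = roughness_index(lambda v: loss(v, X, y) + 0.05 * rng.normal(), w, d)
print(f"roughness (smooth slice): {smooth:.2f}, with injected noise: {noisy:.2f}")
```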
Citations: 0
Temporal context and representative feature learning for weakly supervised video anomaly detection
Pub Date: 2025-11-01 | Epub Date: 2025-07-07 | DOI: 10.1016/j.jiixd.2025.06.001 | Volume 3, Issue 6, Pages 481-491
Helei Qiu, Biao Hou
In weakly supervised video anomaly detection (WSVAD) tasks, the temporal relationships of video are crucial for modeling event patterns. Transformer is a commonly used method for modeling temporal relationships. However, due to the large amount of redundancy in videos and the quadratic complexity of the Transformer, this method cannot effectively model long-range information. In addition, most WSVAD methods select key snippets based on predicted scores to represent event patterns, but this paradigm is susceptible to noise interference. To address the above issues, a novel temporal context and representative feature learning (TCRFL) method for WSVAD is proposed. Specifically, a temporal context learning (TCL) module is proposed to utilize both Mamba with linear complexity and Transformer to capture short-range and long-range dependencies of events. In addition, a representative feature learning (RFL) module is proposed to mine representative snippets to capture important information about events, further spreading it to video features to enhance the influence of representative features. The RFL module not only suppresses noise interference but also guides the model to select key snippets more accurately. The experimental results on UCF-Crime, XD-Violence, and ShanghaiTech datasets demonstrate the effectiveness and superiority of our method.
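A toy rendering of the representative-snippet idea follows: the top-k snippets by anomaly score are averaged into a representative feature that is blended back into every snippet feature. The blending rule, feature dimensions, and scores are placeholders, not the paper's exact RFL module.

```python
# Top-k representative snippet selection and feature spreading (toy MIL-style example).
import torch

torch.manual_seed(0)
T, D, K = 32, 128, 3                       # snippets per video, feature dim, top-k

features = torch.randn(T, D)               # per-snippet features (e.g., backbone outputs)
scores = torch.sigmoid(torch.randn(T))     # per-snippet anomaly scores from the detector

topk_idx = scores.topk(K).indices          # representative (highest-scoring) snippets
rep = features[topk_idx].mean(dim=0)       # representative feature

# Spread the representative information to all snippets, weighted by similarity.
sim = torch.softmax(features @ rep / D**0.5, dim=0)        # (T,)
enhanced = features + sim.unsqueeze(1) * rep.unsqueeze(0)  # (T, D)

video_score = scores[topk_idx].mean()      # MIL-style video-level prediction
print("selected snippets:", topk_idx.tolist(),
      "| video anomaly score:", round(video_score.item(), 3))
print("enhanced feature shape:", tuple(enhanced.shape))
```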
Citations: 0
A DNN-based MIMO signal detector using transformer architecture for next-generation wireless networks
Pub Date: 2025-11-01 | Epub Date: 2025-09-10 | DOI: 10.1016/j.jiixd.2025.08.004 | Volume 3, Issue 6, Pages 526-546
Gevira Omondi, Thomas O. Olwal
Multiple input multiple output (MIMO) communication systems have emerged as a key technology to enhance spectral efficiency and reliability in wireless communications. In recent years, deep neural network (DNN)-based approaches have shown promise in addressing the challenges of MIMO signal detection. Among these approaches, the Transformer architecture, known for its effectiveness in capturing long-range dependencies in sequential data, has gained significant attention. Therefore, this paper proposes a revolutionary DNN-based MIMO signal detection scheme using the Transformer-based architecture. This novel scheme leverages the multi-head self-attention mechanism inherent in Transformer architectures, which enables the model to capture both spatial and temporal dependencies in MIMO channels, thereby improving symbol detection accuracy and robustness under varying channel conditions. The proposed scheme's bit error rate (BER) performance is compared with traditional methods through simulations. The results show that the proposed method achieves a signal-to-noise ratio (SNR) gain of nearly 1.5 dB against the traditional detection methods, with the optimal maximum likelihood detector (MLD) only outperforming it by less than 0.5 dB.
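The sketch below shows an untrained Transformer-style detector in the spirit of the abstract: each receive antenna's measurement and its channel row form one token, multi-head self-attention runs across antennas, and a head outputs per-stream symbol logits. The tokenization, sizes, and QPSK head are illustrative assumptions, not the paper's architecture.

```python
# Minimal (untrained) Transformer-based MIMO detector sketch.
import torch
import torch.nn as nn

class TransformerMIMODetector(nn.Module):
    def __init__(self, nt=4, nr=4, num_symbols=4, d_model=64):
        super().__init__()
        token_dim = 2 + 2 * nt                    # Re/Im of y_i plus Re/Im of H row i
        self.embed = nn.Linear(token_dim, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                       dim_feedforward=128, batch_first=True),
            num_layers=3,
        )
        self.head = nn.Linear(nr * d_model, nt * num_symbols)
        self.nt, self.num_symbols = nt, num_symbols

    def forward(self, y, H):
        # y: (batch, nr) complex, H: (batch, nr, nt) complex
        tokens = torch.cat([y.real.unsqueeze(-1), y.imag.unsqueeze(-1),
                            H.real, H.imag], dim=-1)           # (batch, nr, token_dim)
        h = self.encoder(self.embed(tokens)).flatten(1)
        return self.head(h).view(-1, self.nt, self.num_symbols)  # per-stream logits

nt = nr = 4
H = torch.randn(8, nr, nt, dtype=torch.cfloat) / nt**0.5
x = torch.randn(8, nt, dtype=torch.cfloat)                 # stand-in transmit symbols
y = torch.einsum("brt,bt->br", H, x) + 0.1 * torch.randn(8, nr, dtype=torch.cfloat)
detector = TransformerMIMODetector(nt, nr)
logits = detector(y, H)
print("per-stream symbol logits:", tuple(logits.shape))    # (batch, nt, 4) before training
```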
Citations: 0