
arXiv - CS - Networking and Internet Architecture: Latest Publications

User-centric Service Provision for Edge-assisted Mobile AR: A Digital Twin-based Approach
Pub Date : 2024-08-31 DOI: arxiv-2409.00324
Conghao Zhou, Jie Gao, Yixiang Liu, Shisheng Hu, Nan Cheng, Xuemin Shen
Future 6G networks are envisioned to support mobile augmented reality (MAR) applications and provide customized immersive experiences for users via advanced service provision. In this paper, we investigate user-centric service provision for edge-assisted MAR to support the timely camera frame uploading of an MAR device by optimizing the spectrum resource reservation. To address the challenge of non-stationary data traffic due to uncertain user movement and the complex camera frame uploading mechanism, we develop a digital twin (DT)-based data-driven approach to user-centric service provision. Specifically, we first establish a hierarchical data model with well-defined data attributes to characterize the impact of the camera frame uploading mechanism on the user-specific data traffic. We then design an easy-to-use algorithm to adapt the data attributes used in traffic modeling to the non-stationary data traffic. We also derive a closed-form service provision solution tailored to data-driven traffic modeling with the consideration of potential modeling inaccuracies. Trace-driven simulation results demonstrate that our DT-based approach for user-centric service provision outperforms conventional approaches in terms of adaptivity and robustness.
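The paper's closed-form solution is not reproduced here; as a rough intuition for data-driven reservation under modeling inaccuracy, the sketch below uses a sliding window of observed frame sizes and a mean-plus-margin rule. The class name, window, and margin rule are all illustrative assumptions, not the authors' formulation.

```python
from collections import deque
from statistics import mean, pstdev

class TrafficTwin:
    """Toy digital-twin traffic model over a sliding window of frame sizes."""

    def __init__(self, window=8):
        self.samples = deque(maxlen=window)  # recent uplink frame sizes (bits)

    def observe(self, frame_bits):
        self.samples.append(frame_bits)

    def reserve(self, margin=1.0):
        # Reservation = mean demand + margin * std deviation, a stand-in
        # for the paper's closed-form solution that hedges against
        # traffic-modeling inaccuracy.
        if not self.samples:
            return 0.0
        return mean(self.samples) + margin * pstdev(self.samples)

twin = TrafficTwin()
for bits in [9.5e5, 1.1e6, 1.0e6, 1.2e6]:  # invented camera-frame sizes
    twin.observe(bits)
print(twin.reserve(margin=1.0))
```

Raising `margin` trades spectrum efficiency for robustness to traffic-model error, mirroring the trade-off the abstract describes.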
{"title":"User-centric Service Provision for Edge-assisted Mobile AR: A Digital Twin-based Approach","authors":"Conghao Zhou, Jie Gao, Yixiang Liu, Shisheng Hu, Nan Cheng, Xuemin Shen","doi":"arxiv-2409.00324","DOIUrl":"https://doi.org/arxiv-2409.00324","url":null,"abstract":"Future 6G networks are envisioned to support mobile augmented reality (MAR)\u0000applications and provide customized immersive experiences for users via\u0000advanced service provision. In this paper, we investigate user-centric service\u0000provision for edge-assisted MAR to support the timely camera frame uploading of\u0000an MAR device by optimizing the spectrum resource reservation. To address the\u0000challenge of non-stationary data traffic due to uncertain user movement and the\u0000complex camera frame uploading mechanism, we develop a digital twin (DT)-based\u0000data-driven approach to user-centric service provision. Specifically, we first\u0000establish a hierarchical data model with well-defined data attributes to\u0000characterize the impact of the camera frame uploading mechanism on the\u0000user-specific data traffic. We then design an easy-to-use algorithm to adapt\u0000the data attributes used in traffic modeling to the non-stationary data\u0000traffic. We also derive a closed-form service provision solution tailored to\u0000data-driven traffic modeling with the consideration of potential modeling\u0000inaccuracies. 
Trace-driven simulation results demonstrate that our DT-based\u0000approach for user-centric service provision outperforms conventional approaches\u0000in terms of adaptivity and robustness.","PeriodicalId":501280,"journal":{"name":"arXiv - CS - Networking and Internet Architecture","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Energy-efficient Functional Split in Non-terrestrial Open Radio Access Networks
Pub Date : 2024-08-31 DOI: arxiv-2409.00466
S. M. Mahdi Shahabi, Xiaonan Deng, Ahmad Qidan, Taisir Elgorashi, Jaafar Elmirghani
This paper investigates the integration of Open Radio Access Network (O-RAN) within non-terrestrial networks (NTN), optimizing the dynamic functional split between Centralized Units (CU) and Distributed Units (DU) for enhanced energy efficiency in the network. We introduce a novel framework utilizing a Deep Q-Network (DQN)-based reinforcement learning approach to dynamically find the optimal RAN functional split option and the best NTN-based RAN network out of the available NTN platforms according to real-time conditions, traffic demands, and limited energy resources in NTN platforms. This approach supports adaptation to various NTN-based RANs across different platforms such as LEO satellites and high-altitude platform stations (HAPS), enabling adaptive network reconfiguration to ensure optimal service quality and energy utilization. Simulation results validate the effectiveness of our method, offering significant improvements in energy efficiency and sustainability under diverse NTN scenarios.
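As a hedged illustration of the selection problem (a tabular stand-in, not the paper's DQN), the sketch below learns a joint (platform, functional split) choice against an invented energy model; every constant and label here is an assumption for illustration.

```python
import random

random.seed(0)  # reproducible toy run

# Joint action space: which NTN platform hosts the RAN, and which split.
PLATFORMS = ["LEO", "HAPS"]
SPLITS = ["split-2", "split-6", "split-7.2"]
ACTIONS = [(p, s) for p in PLATFORMS for s in SPLITS]

def energy_cost(platform, split, load):
    # Invented energy model: platform base power plus a load-dependent
    # fronthaul cost that grows as more functions move to the DU.
    base = {"LEO": 3.0, "HAPS": 2.0}[platform]
    fronthaul = {"split-2": 0.5, "split-6": 1.0, "split-7.2": 2.0}[split]
    return base + fronthaul * load

Q = {a: 0.0 for a in ACTIONS}
alpha, eps = 0.1, 0.2
for _ in range(2000):
    load = random.random()  # stand-in for real-time traffic demand
    a = random.choice(ACTIONS) if random.random() < eps else max(Q, key=Q.get)
    reward = -energy_cost(*a, load)  # minimizing energy = maximizing -cost
    Q[a] += alpha * (reward - Q[a])

best = max(Q, key=Q.get)
print(best)
```

The epsilon-greedy loop converges toward the cheapest platform/split pair under the assumed cost model; the paper replaces the table with a DQN to handle real state spaces.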
{"title":"Energy-efficient Functional Split in Non-terrestrial Open Radio Access Networks","authors":"S. M. Mahdi Shahabi, Xiaonan Deng, Ahmad Qidan, Taisir Elgorashi, Jaafar Elmirghani","doi":"arxiv-2409.00466","DOIUrl":"https://doi.org/arxiv-2409.00466","url":null,"abstract":"This paper investigates the integration of Open Radio Access Network (O-RAN)\u0000within non-terrestrial networks (NTN), and optimizing the dynamic functional\u0000split between Centralized Units (CU) and Distributed Units (DU) for enhanced\u0000energy efficiency in the network. We introduce a novel framework utilizing a\u0000Deep Q-Network (DQN)-based reinforcement learning approach to dynamically find\u0000the optimal RAN functional split option and the best NTN-based RAN network out\u0000of the available NTN-platforms according to real-time conditions, traffic\u0000demands, and limited energy resources in NTN platforms. This approach supports\u0000capability of adapting to various NTN-based RANs across different platforms\u0000such as LEO satellites and high-altitude platform stations (HAPS), enabling\u0000adaptive network reconfiguration to ensure optimal service quality and energy\u0000utilization. Simulation results validate the effectiveness of our method,\u0000offering significant improvements in energy efficiency and sustainability under\u0000diverse NTN scenarios.","PeriodicalId":501280,"journal":{"name":"arXiv - CS - Networking and Internet Architecture","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Time varying channel estimation for RIS assisted network with outdated CSI: Looking beyond coherence time
Pub Date : 2024-08-30 DOI: arxiv-2408.17128
Souvik Deb, Sasthi C. Ghosh
The channel estimation (CE) overhead for unstructured multipath-rich channels increases linearly with the number of reflective elements of a reconfigurable intelligent surface (RIS). This results in a significant portion of the channel coherence time being spent on CE, reducing data communication time. Furthermore, due to the mobility of the user equipment (UE) and the time consumed during CE, the estimated channel state information (CSI) may become outdated during actual data communication. In recent studies, the timing for CE has been primarily determined based on the coherence time interval, which depends on the velocity of the UE. However, the current channel condition and path loss of the UEs can also be utilized to control the duration between successive CEs, reducing the overhead while still maintaining the quality of service. Furthermore, for multi-user systems, the appropriate coherence time intervals of different users may differ depending on their velocities. Therefore, CE carried out while ignoring the difference in coherence time of different UEs may result in the estimated CSI being detrimentally outdated for some users, while others may not have sufficient time for data communication. To this end, based on a throughput analysis under outdated CSI, an algorithm has been designed to dynamically predict the next time instant for CE after the current CSI acquisition. In the first step, the optimal RIS phase shifts to maximise channel gain are computed. Based on this and the amount of degradation of SINR due to outdated CSI, transmit powers are allocated for the UEs, and finally the next time instant for CE is predicted such that the aggregated throughput is maximized. Simulation results confirm that our proposed algorithm outperforms the coherence time-based strategies.
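The trade-off driving the CE-timing prediction can be illustrated with a toy model: a fixed CE overhead per re-estimation interval versus an average rate that decays as the CSI ages. The exponential aging model and all constants below are assumptions for illustration, not the paper's throughput analysis.

```python
import math

def effective_throughput(T, tau=1.0, rate0=10.0, aging=0.05):
    # Usable fraction of the interval after spending tau on CE, times the
    # average achievable rate with exponentially aging CSI (assumed model).
    if T <= tau:
        return 0.0
    avg_rate = rate0 * (1 - math.exp(-aging * T)) / (aging * T)
    return (T - tau) / T * avg_rate

# Search candidate re-estimation intervals for the throughput-maximizing one.
best_T = max(range(2, 101), key=effective_throughput)
print(best_T)
```

The maximizer sits between the extremes: re-estimating too often wastes time on CE overhead, too rarely leaves the CSI detrimentally outdated.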
{"title":"Time varying channel estimation for RIS assisted network with outdated CSI: Looking beyond coherence time","authors":"Souvik Deb, Sasthi C. Ghosh","doi":"arxiv-2408.17128","DOIUrl":"https://doi.org/arxiv-2408.17128","url":null,"abstract":"The channel estimation (CE) overhead for unstructured multipath-rich channels\u0000increases linearly with the number of reflective elements of reconfigurable\u0000intelligent surface (RIS). This results in a significant portion of the channel\u0000coherence time being spent on CE, reducing data communication time.\u0000Furthermore, due to the mobility of the user equipment (UE) and the time\u0000consumed during CE, the estimated channel state information (CSI) may become\u0000outdated during actual data communication. In recent studies, the timing for CE\u0000has been primarily determined based on the coherence time interval, which is\u0000dependent on the velocity of the UE. However, the effect of the current channel\u0000condition and pathloss of the UEs can also be utilized to control the duration\u0000between successive CE to reduce the overhead while still maintaining the\u0000quality of service. Furthermore, for muti-user systems, the appropriate\u0000coherence time intervals of different users may be different depending on their\u0000velocities. Therefore CE carried out ignoring the difference in coherence time\u0000of different UEs may result in the estimated CSI being detrimentally outdated\u0000for some users. In contrast, others may not have sufficient time for data\u0000communication. To this end, based on the throughput analysis on outdated CSI,\u0000an algorithm has been designed to dynamically predict the next time instant for\u0000CE after the current CSI acquisition. In the first step, optimal RIS phase\u0000shifts to maximise channel gain is computed. 
Based on this and the amount of\u0000degradation of SINR due to outdated CSI, transmit powers are allocated for the\u0000UEs and finally the next time instant for CE is predicted such that the\u0000aggregated throughput is maximized. Simulation results confirm that our\u0000proposed algorithm outperforms the coherence time-based strategies.","PeriodicalId":501280,"journal":{"name":"arXiv - CS - Networking and Internet Architecture","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Prioritized Information Bottleneck Theoretic Framework with Distributed Online Learning for Edge Video Analytics
Pub Date : 2024-08-30 DOI: arxiv-2409.00146
Zhengru Fang, Senkang Hu, Jingjing Wang, Yiqin Deng, Xianhao Chen, Yuguang Fang
Collaborative perception systems leverage multiple edge devices, such as surveillance cameras or autonomous cars, to enhance sensing quality and eliminate blind spots. Despite their advantages, challenges such as limited channel capacity and data redundancy impede their effectiveness. To address these issues, we introduce the Prioritized Information Bottleneck (PIB) framework for edge video analytics. This framework prioritizes the shared data based on the signal-to-noise ratio (SNR) and camera coverage of the region of interest (RoI), reducing spatial-temporal data redundancy to transmit only essential information. This strategy avoids the need for video reconstruction at edge servers and maintains low latency. It leverages a deterministic information bottleneck method to extract compact, relevant features, balancing informativeness and communication costs. For high-dimensional data, we apply variational approximations for practical optimization. To reduce communication costs over fluctuating connections, we propose a gate mechanism based on distributed online learning (DOL) to filter out less informative messages and efficiently select edge servers. Moreover, we establish the asymptotic optimality of DOL by proving the sublinearity of its regret. Compared to five coding methods for image and video compression, PIB improves mean object detection accuracy (MODA) by 17.8% and reduces communication costs by 82.80% under poor channel conditions.
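A minimal sketch of SNR-and-coverage-based prioritization under a channel budget follows; the linear weights and top-k rule are invented stand-ins (the paper's gate mechanism is learned online, not hand-weighted).

```python
def priority(snr_db, roi_coverage, w_snr=0.05, w_cov=1.0):
    # Invented linear score combining channel quality and RoI coverage.
    return w_snr * snr_db + w_cov * roi_coverage

cameras = [
    {"id": "cam0", "snr_db": 22, "roi": 0.9},
    {"id": "cam1", "snr_db": 5, "roi": 0.2},
    {"id": "cam2", "snr_db": 15, "roi": 0.6},
]

budget = 2  # channel supports only two uploads this slot (assumed)
ranked = sorted(cameras, key=lambda c: priority(c["snr_db"], c["roi"]), reverse=True)
selected = [c["id"] for c in ranked[:budget]]
print(selected)
```

Only the highest-scoring cameras transmit, so a low-SNR camera with poor RoI coverage is gated out rather than wasting channel capacity.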
{"title":"Prioritized Information Bottleneck Theoretic Framework with Distributed Online Learning for Edge Video Analytics","authors":"Zhengru Fang, Senkang Hu, Jingjing Wang, Yiqin Deng, Xianhao Chen, Yuguang Fang","doi":"arxiv-2409.00146","DOIUrl":"https://doi.org/arxiv-2409.00146","url":null,"abstract":"Collaborative perception systems leverage multiple edge devices, such\u0000surveillance cameras or autonomous cars, to enhance sensing quality and\u0000eliminate blind spots. Despite their advantages, challenges such as limited\u0000channel capacity and data redundancy impede their effectiveness. To address\u0000these issues, we introduce the Prioritized Information Bottleneck (PIB)\u0000framework for edge video analytics. This framework prioritizes the shared data\u0000based on the signal-to-noise ratio (SNR) and camera coverage of the region of\u0000interest (RoI), reducing spatial-temporal data redundancy to transmit only\u0000essential information. This strategy avoids the need for video reconstruction\u0000at edge servers and maintains low latency. It leverages a deterministic\u0000information bottleneck method to extract compact, relevant features, balancing\u0000informativeness and communication costs. For high-dimensional data, we apply\u0000variational approximations for practical optimization. To reduce communication\u0000costs in fluctuating connections, we propose a gate mechanism based on\u0000distributed online learning (DOL) to filter out less informative messages and\u0000efficiently select edge servers. Moreover, we establish the asymptotic\u0000optimality of DOL by proving the sublinearity of their regrets. 
Compared to\u0000five coding methods for image and video compression, PIB improves mean object\u0000detection accuracy (MODA) while reducing 17.8% and reduces communication costs\u0000by 82.80% under poor channel conditions.","PeriodicalId":501280,"journal":{"name":"arXiv - CS - Networking and Internet Architecture","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deadline and Priority Constrained Immersive Video Streaming Transmission Scheduling
Pub Date : 2024-08-30 DOI: arxiv-2408.17028
Tongtong Feng, Qi Qi, Bo He, Jingyu Wang
Deadline-aware transmission scheduling in immersive video streaming is crucial. The objective is to guarantee that blocks transmitted over multiple links are fully delivered within their deadlines; the fraction that are is referred to as the delivery ratio. Compared with existing models that focus on maximizing throughput and ultra-low latency, which leaves bandwidth resource allocation and user satisfaction only locally optimized, immersive video streaming needs to guarantee that more high-priority blocks are delivered within personalized deadlines. In this paper, we propose a deadline- and priority-constrained immersive video streaming transmission scheduling scheme. It builds an accurate bandwidth prediction model that can sensitively assist scheduling decisions. It divides video streaming into various media elements and performs scheduling based on the user's personalized latency sensitivity thresholds and the media element's priority. We evaluate our scheme via trace-driven simulations. Compared with existing models, the results further demonstrate the superiority of our scheme with 12%-31% gains in quality of experience (QoE).
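A toy earliest-deadline-first scheduler with priority tie-breaking illustrates the delivery-ratio objective; the block sizes, deadlines, and scheduling rule are illustrative assumptions, not the proposed scheme.

```python
def schedule(blocks, bandwidth):
    """Send blocks earliest-deadline-first (priority breaks ties); a block
    counts as delivered only if it is fully sent by its deadline."""
    t = 0.0
    delivered = []
    for b in sorted(blocks, key=lambda x: (x["deadline"], -x["priority"])):
        t += b["size"] / bandwidth  # transmission time for this block
        if t <= b["deadline"]:
            delivered.append(b["id"])
    return delivered

# Invented media elements: a high-priority base layer plus enhancements.
blocks = [
    {"id": "base", "size": 4, "deadline": 2.0, "priority": 3},
    {"id": "enh1", "size": 4, "deadline": 2.0, "priority": 1},
    {"id": "enh2", "size": 4, "deadline": 5.0, "priority": 2},
]
done = schedule(blocks, bandwidth=2.0)
ratio = len(done) / len(blocks)  # the delivery ratio from the abstract
print(done, ratio)
```

When bandwidth is scarce, the priority tie-break ensures the high-priority base layer makes its deadline even though lower-priority blocks miss theirs; a production scheme would additionally drop provably late blocks instead of sending them.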
{"title":"Deadline and Priority Constrained Immersive Video Streaming Transmission Scheduling","authors":"Tongtong Feng, Qi Qi, Bo He, Jingyu Wang","doi":"arxiv-2408.17028","DOIUrl":"https://doi.org/arxiv-2408.17028","url":null,"abstract":"Deadline-aware transmission scheduling in immersive video streaming is\u0000crucial. The objective is to guarantee that at least a certain block in\u0000multi-links is fully delivered within their deadlines, which is referred to as\u0000delivery ratio. Compared with existing models that focus on maximizing\u0000throughput and ultra-low latency, which makes bandwidth resource allocation and\u0000user satisfaction locally optimized, immersive video streaming needs to\u0000guarantee more high-priority block delivery within personalized deadlines. In\u0000this paper, we propose a deadline and priority-constrained immersive video\u0000streaming transmission scheduling scheme. It builds an accurate bandwidth\u0000prediction model that can sensitively assist scheduling decisions. It divides\u0000video streaming into various media elements and performs scheduling based on\u0000the user's personalized latency sensitivity thresholds and the media element's\u0000priority. We evaluate our scheme via trace-driven simulations. 
Compared with\u0000existing models, the results further demonstrate the superiority of our scheme\u0000with 12{%}-31{%} gains in quality of experience (QoE).","PeriodicalId":501280,"journal":{"name":"arXiv - CS - Networking and Internet Architecture","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PIB: Prioritized Information Bottleneck Framework for Collaborative Edge Video Analytics
Pub Date : 2024-08-30 DOI: arxiv-2408.17047
Zhengru Fang, Senkang Hu, Liyan Yang, Yiqin Deng, Xianhao Chen, Yuguang Fang
Collaborative edge sensing systems, particularly collaborative perception systems in autonomous driving, can significantly enhance tracking accuracy and reduce blind spots with multi-view sensing capabilities. However, their limited channel capacity and the redundancy in sensory data pose significant challenges, affecting the performance of collaborative inference tasks. To tackle these issues, we introduce a Prioritized Information Bottleneck (PIB) framework for collaborative edge video analytics. We first propose a priority-based inference mechanism that jointly considers the signal-to-noise ratio (SNR) and the camera's coverage area of the region of interest (RoI). To enable efficient inference, PIB reduces video redundancy in both spatial and temporal domains and transmits only the essential information for the downstream inference tasks. This eliminates the need to reconstruct videos on the edge server while maintaining low latency. Specifically, it derives compact, task-relevant features by employing the deterministic information bottleneck (IB) method, which strikes a balance between feature informativeness and communication costs. Given the computational challenges caused by IB-based objectives with high-dimensional data, we resort to variational approximations for feasible optimization. Compared to TOCOM-TEM, JPEG, and HEVC, PIB achieves an improvement of up to 15.1% in mean object detection accuracy (MODA) and reduces communication costs by 66.7% when edge cameras experience poor channel conditions.
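The informativeness-versus-communication-cost balance struck by the IB objective can be caricatured as a one-dimensional trade-off over feature size; the loss and cost curves below are invented stand-ins for the mutual-information terms, not the paper's objective.

```python
def ib_objective(dim, beta=0.02):
    # Stand-ins for the IB terms: task loss falls with feature size while
    # communication cost grows with it (both curves are invented).
    task_loss = 1.0 / dim
    comm_cost = float(dim)
    return task_loss + beta * comm_cost

# Pick the feature dimensionality minimizing the Lagrangian-style trade-off.
best_dim = min(range(1, 65), key=ib_objective)
print(best_dim)
```

The multiplier `beta` plays the role of the IB trade-off parameter: a larger `beta` penalizes transmission more heavily and pushes the optimum toward smaller, more compressed features.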
{"title":"PIB: Prioritized Information Bottleneck Framework for Collaborative Edge Video Analytics","authors":"Zhengru Fang, Senkang Hu, Liyan Yang, Yiqin Deng, Xianhao Chen, Yuguang Fang","doi":"arxiv-2408.17047","DOIUrl":"https://doi.org/arxiv-2408.17047","url":null,"abstract":"Collaborative edge sensing systems, particularly in collaborative perception\u0000systems in autonomous driving, can significantly enhance tracking accuracy and\u0000reduce blind spots with multi-view sensing capabilities. However, their limited\u0000channel capacity and the redundancy in sensory data pose significant\u0000challenges, affecting the performance of collaborative inference tasks. To\u0000tackle these issues, we introduce a Prioritized Information Bottleneck (PIB)\u0000framework for collaborative edge video analytics. We first propose a\u0000priority-based inference mechanism that jointly considers the signal-to-noise\u0000ratio (SNR) and the camera's coverage area of the region of interest (RoI). To\u0000enable efficient inference, PIB reduces video redundancy in both spatial and\u0000temporal domains and transmits only the essential information for the\u0000downstream inference tasks. This eliminates the need to reconstruct videos on\u0000the edge server while maintaining low latency. Specifically, it derives\u0000compact, task-relevant features by employing the deterministic information\u0000bottleneck (IB) method, which strikes a balance between feature informativeness\u0000and communication costs. Given the computational challenges caused by IB-based\u0000objectives with high-dimensional data, we resort to variational approximations\u0000for feasible optimization. 
Compared to TOCOM-TEM, JPEG, and HEVC, PIB achieves\u0000an improvement of up to 15.1% in mean object detection accuracy (MODA) and\u0000reduces communication costs by 66.7% when edge cameras experience poor channel\u0000conditions.","PeriodicalId":501280,"journal":{"name":"arXiv - CS - Networking and Internet Architecture","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reasoning AI Performance Degradation in 6G Networks with Large Language Models
Pub Date : 2024-08-30 DOI: arxiv-2408.17097
Liming Huang, Yulei Wu, Dimitra Simeonidou
The integration of Artificial Intelligence (AI) within 6G networks is poised to revolutionize connectivity, reliability, and intelligent decision-making. However, the performance of AI models in these networks is crucial, as any decline can significantly impact network efficiency and the services it supports. Understanding the root causes of performance degradation is essential for maintaining optimal network functionality. In this paper, we propose a novel approach to reason about AI model performance degradation in 6G networks using the Large Language Model (LLM)-empowered Chain-of-Thought (CoT) method. Our approach employs an LLM as a "teacher" model through zero-shot prompting to generate teaching CoT rationales, followed by a CoT "student" model that is fine-tuned on the generated teaching data to learn to reason about performance declines. The efficacy of this model is evaluated in a real-world scenario involving a real-time 3D rendering task with multi-access technologies (mATs), including WiFi, 5G, and LiFi, for data transmission. Experimental results show that our approach achieves over 97% reasoning accuracy on the built test questions, confirming the validity of our collected dataset and the effectiveness of the LLM-CoT method. Our findings highlight the potential of LLMs in enhancing the reliability and efficiency of 6G networks, representing a significant advancement in the evolution of AI-native network infrastructures.
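The teacher-student data flow described above can be sketched as pure data plumbing, with a canned stub in place of the real zero-shot-prompted LLM; the questions and rationale text are invented for illustration.

```python
def teacher_cot(question):
    # Canned stub standing in for zero-shot prompting of a large "teacher"
    # model; a real pipeline would call an LLM API here instead.
    return (
        "Step 1: inspect model KPIs. Step 2: correlate drops with channel "
        f"logs. Step 3: attribute the degradation. Question: {question}"
    )

# Invented diagnostic questions about AI performance degradation.
questions = [
    "Why did rendering FPS drop on the 5G link?",
    "Why did accuracy degrade after handover to LiFi?",
]
# Each teacher rationale becomes a fine-tuning record for the CoT student.
finetune_set = [{"prompt": q, "completion": teacher_cot(q)} for q in questions]
print(len(finetune_set))
```

The student model is then fine-tuned on `finetune_set` so it can produce degradation rationales without the expensive teacher in the loop.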
{"title":"Reasoning AI Performance Degradation in 6G Networks with Large Language Models","authors":"Liming Huang, Yulei Wu, Dimitra Simeonidou","doi":"arxiv-2408.17097","DOIUrl":"https://doi.org/arxiv-2408.17097","url":null,"abstract":"The integration of Artificial Intelligence (AI) within 6G networks is poised\u0000to revolutionize connectivity, reliability, and intelligent decision-making.\u0000However, the performance of AI models in these networks is crucial, as any\u0000decline can significantly impact network efficiency and the services it\u0000supports. Understanding the root causes of performance degradation is essential\u0000for maintaining optimal network functionality. In this paper, we propose a\u0000novel approach to reason about AI model performance degradation in 6G networks\u0000using the Large Language Models (LLMs) empowered Chain-of-Thought (CoT) method.\u0000Our approach employs an LLM as a ''teacher'' model through zero-shot prompting\u0000to generate teaching CoT rationales, followed by a CoT ''student'' model that\u0000is fine-tuned by the generated teaching data for learning to reason about\u0000performance declines. The efficacy of this model is evaluated in a real-world\u0000scenario involving a real-time 3D rendering task with multi-Access Technologies\u0000(mATs) including WiFi, 5G, and LiFi for data transmission. Experimental results\u0000show that our approach achieves over 97% reasoning accuracy on the built test\u0000questions, confirming the validity of our collected dataset and the\u0000effectiveness of the LLM-CoT method. 
Our findings highlight the potential of\u0000LLMs in enhancing the reliability and efficiency of 6G networks, representing a\u0000significant advancement in the evolution of AI-native network infrastructures.","PeriodicalId":501280,"journal":{"name":"arXiv - CS - Networking and Internet Architecture","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Next-Generation Wi-Fi Networks with Generative AI: Design and Insights
Pub Date : 2024-08-09 DOI: arxiv-2408.04835
Jingyu Wang, Xuming Fang, Dusit Niyato, Tie Liu
Generative artificial intelligence (GAI), known for its powerful capabilities in image and text processing, also holds significant promise for the design and performance enhancement of future wireless networks. In this article, we explore the transformative potential of GAI in next-generation Wi-Fi networks, exploiting its advanced capabilities to address key challenges and improve overall network performance. We begin by reviewing the development of major Wi-Fi generations and illustrating the challenges that future Wi-Fi networks may encounter. We then introduce typical GAI models and detail their potential capabilities in Wi-Fi network optimization, performance enhancement, and other applications. Furthermore, we present a case study wherein we propose a retrieval-augmented LLM (RA-LLM)-enabled Wi-Fi design framework that aids in problem formulation, which is subsequently solved using a generative diffusion model (GDM)-based deep reinforcement learning (DRL) framework to optimize various network parameters. Numerical results demonstrate the effectiveness of our proposed algorithm in high-density deployment scenarios. Finally, we provide some potential future research directions for GAI-assisted Wi-Fi networks.
{"title":"Next-Generation Wi-Fi Networks with Generative AI: Design and Insights","authors":"Jingyu Wang, Xuming Fang, Dusit Niyato, Tie Liu","doi":"arxiv-2408.04835","DOIUrl":"https://doi.org/arxiv-2408.04835","url":null,"abstract":"Generative artificial intelligence (GAI), known for its powerful capabilities\u0000in image and text processing, also holds significant promise for the design and\u0000performance enhancement of future wireless networks. In this article, we\u0000explore the transformative potential of GAI in next-generation Wi-Fi networks,\u0000exploiting its advanced capabilities to address key challenges and improve\u0000overall network performance. We begin by reviewing the development of major\u0000Wi-Fi generations and illustrating the challenges that future Wi-Fi networks\u0000may encounter. We then introduce typical GAI models and detail their potential\u0000capabilities in Wi-Fi network optimization, performance enhancement, and other\u0000applications. Furthermore, we present a case study wherein we propose a\u0000retrieval-augmented LLM (RA-LLM)-enabled Wi-Fi design framework that aids in\u0000problem formulation, which is subsequently solved using a generative diffusion\u0000model (GDM)-based deep reinforcement learning (DRL) framework to optimize\u0000various network parameters. Numerical results demonstrate the effectiveness of\u0000our proposed algorithm in high-density deployment scenarios. 
Finally, we\u0000provide some potential future research directions for GAI-assisted Wi-Fi\u0000networks.","PeriodicalId":501280,"journal":{"name":"arXiv - CS - Networking and Internet Architecture","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141944404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Energy performance of LR-FHSS: analysis and evaluation
Pub Date : 2024-08-09 DOI: arxiv-2408.04908
Roger Sanchez-Vital, Lluís Casals, Bartomeu Heer-Salva, Rafael Vidal, Carles Gomez, Eduard Garcia-Villegas
Long Range-Frequency Hopping Spread Spectrum (LR-FHSS) is a pivotal advancement in the LoRaWAN protocol, designed to enhance the network's capacity and robustness, particularly in densely populated environments. Although energy consumption is paramount in LoRaWAN-based end-devices, there are currently no studies in the literature, to our knowledge, that model the impact of this novel mechanism on energy consumption. In this article, we provide a comprehensive energy consumption analytical model of LR-FHSS, focusing on three critical metrics: average current consumption, battery lifetime, and energy efficiency of data transmission. The model is based on measurements performed on real hardware in a fully operational LR-FHSS network. While LR-FHSS can show worse consumption figures than LoRa in our evaluation, we found that with optimal configuration, the battery lifetime of LR-FHSS end-devices can reach 2.5 years for a 50-minute notification period. For the most energy-efficient payload size, this lifespan can be extended to a theoretical maximum of up to 16 years with a one-day notification interval using a coin-cell battery.
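The lifetime metric the paper models follows from a duty-cycle-weighted average current. A back-of-the-envelope version of that relationship is sketched below; all device figures (sleep/transmit currents, airtime, battery capacity) are illustrative assumptions, not values measured in the paper.

```python
HOURS_PER_YEAR = 24 * 365

def average_current_ma(i_sleep_ma, i_tx_ma, tx_time_s, period_s):
    """Duty-cycle-weighted average of transmit and sleep current."""
    duty = tx_time_s / period_s
    return i_tx_ma * duty + i_sleep_ma * (1.0 - duty)

def lifetime_years(capacity_mah, i_avg_ma):
    """Ideal lifetime: capacity over average draw (ignores self-discharge)."""
    return capacity_mah / i_avg_ma / HOURS_PER_YEAR

# Hypothetical end-device: 5 uA sleep, 45 mA during a 4 s LR-FHSS uplink,
# one uplink every 50 minutes, powered by a 1000 mAh battery.
i_avg = average_current_ma(i_sleep_ma=0.005, i_tx_ma=45.0,
                           tx_time_s=4.0, period_s=50 * 60)
print(f"average current: {i_avg:.4f} mA")
print(f"lifetime: {lifetime_years(1000.0, i_avg):.2f} years")
```

With these assumed figures the transmit duty cycle dominates the budget, which is why the paper's reported lifetimes are so sensitive to notification period and payload size.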
Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks
Pub Date : 2024-08-08 DOI: arxiv-2408.04705
Yudi Huang, Tingyang Sun, Ting He
The emerging machine learning paradigm of decentralized federated learning (DFL) has the promise of greatly boosting the deployment of artificial intelligence (AI) by directly learning across distributed agents without centralized coordination. Despite significant efforts on improving the communication efficiency of DFL, most existing solutions were based on the simplistic assumption that neighboring agents are physically adjacent in the underlying communication network, which fails to correctly capture the communication cost when learning over a general bandwidth-limited network, as encountered in many edge networks. In this work, we address this gap by leveraging recent advances in network tomography to jointly design the communication demands and the communication schedule for overlay-based DFL in bandwidth-limited networks without requiring explicit cooperation from the underlying network. By carefully analyzing the structure of our problem, we decompose it into a series of optimization problems that can each be solved efficiently, to collectively minimize the total training time. Extensive data-driven simulations show that our solution can significantly accelerate DFL in comparison with state-of-the-art designs.
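The bottleneck this joint design targets can be made concrete with a minimal per-round time model: each round finishes only when the slowest agent has done its local compute and pushed its model update to every overlay neighbor over its bandwidth-limited access link. The sketch below uses this simple serialized-transfer model with an illustrative topology and rates — it is not the paper's formulation, which additionally schedules transfers over shared underlay links inferred via network tomography.

```python
def round_time(neighbors, compute_s, model_mbits, link_mbps):
    """Per-round completion time = max over agents of local compute plus
    the time to send the model to all overlay neighbors, serialized over
    that agent's access link."""
    times = []
    for agent, nbrs in neighbors.items():
        comm_s = len(nbrs) * model_mbits / link_mbps[agent]
        times.append(compute_s[agent] + comm_s)
    return max(times)

# A 4-agent ring overlay: each agent gossips with two neighbors.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
compute = {0: 2.0, 1: 2.5, 2: 2.0, 3: 3.0}       # seconds per local step
uplink = {0: 100.0, 1: 20.0, 2: 100.0, 3: 50.0}  # Mbit/s access capacity
t = round_time(ring, compute, model_mbits=400.0, link_mbps=uplink)
print(f"per-round time: {t:.1f} s")
```

Even in this toy model, the slow-uplink agent (agent 1) dominates the round, illustrating why choosing the overlay demands and schedule jointly — rather than assuming neighbors are physically adjacent — matters for total training time.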