
Latest Publications in IEEE Transactions on Broadcasting

Broadcasting and 6G Converged Network Architecture
IF 3.2 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-28 | DOI: 10.1109/TBC.2024.3407482
Haojiang Li;Wenjun Zhang;Yin Xu;Dazhi He;Haoyang Li
With the arrival of the 6G era, wireless communication networks will face increased pressure from diversified service traffic demanding ultra-large bandwidth, ultra-low latency, and massive connections, making quality of service difficult to guarantee. Broadcasting, however, can achieve wide-area coverage while occupying fewer physical transmission resources. The convergence of broadcasting and 6G networks can therefore drive the evolution and upgrade of traditional broadcasting services toward flexibility, dynamism, and personalization, while effectively alleviating data congestion in mobile communication networks. In this paper, we first introduce three typical future application scenarios of broadcasting and 6G convergence and summarize the key technologies and challenges in constructing the converged network. On this basis, we propose a broadcasting and 6G converged network architecture and a next-generation 6G broadcasting core network architecture, and finally introduce the typical collaboration modes of the converged network.
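One collaboration mode such a converged network enables is offloading popular content from per-user unicast onto a shared broadcast bearer. The sketch below is a hypothetical illustration only; the cost model, function name, and numbers are assumptions, not the paper's algorithm:

```python
# Hypothetical mode-selection rule (illustration, not from the paper):
# broadcast one shared stream when that costs fewer radio resources than
# sending an individual unicast stream to every requester.

def choose_delivery_mode(num_requesters: int,
                         unicast_cost_per_user: float,
                         broadcast_cost: float) -> str:
    """Pick the cheaper delivery mode for one content item."""
    unicast_total = num_requesters * unicast_cost_per_user
    return "broadcast" if broadcast_cost < unicast_total else "unicast"

# A popular live event: broadcast covers all 10,000 viewers at a fixed cost
# assumed here to be 50x one unicast stream.
print(choose_delivery_mode(10_000, 1.0, 50.0))  # broadcast
# A niche on-demand item watched by only 3 users stays on unicast.
print(choose_delivery_mode(3, 1.0, 50.0))       # unicast
```

The crossover point is exactly where the shared-bearer cost equals the summed unicast cost, which is why broadcast pays off for popular content and unicast for long-tail content.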
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 971–979.
Citations: 0
A Content-Aware Full-Reference Image Quality Assessment Method Using a Gram Matrix and Signal-to-Noise
IF 4.5 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-28 | DOI: 10.1109/tbc.2024.3410707
Shuqi Han, Yueting Huang, Mingliang Zhou, Xuekai Wei, Fan Jia, Xu Zhuang, Fei Cheng, Tao Xiang, Yong Feng, Huayan Pu, Jun Luo
Citations: 0
Securing Content Production Centers in 5G Broadcasting: Strategies and Technologies for Mitigating Cybersecurity Risks
IF 3.2 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-27 | DOI: 10.1109/TBC.2024.3407596
Yang Liu;Jie Wang;Ruohan Cao;Yueming Lu;Yaojun Qiao;Yuanqing Xia;Daoqi Han
This paper presents a comprehensive investigation into security within 5G broadcasting environments, with a particular focus on content production centers. It delves into the unique challenges and vulnerabilities associated with 5G technology in the context of broadcasting media. The study provides an up-to-date survey of the current landscape of 5G network security, emphasizing the requirements and risks specific to broadcasting. In response to these challenges, we propose a set of robust security strategies and technologies tailored to these environments. Through rigorous simulations and compelling case studies, we demonstrate the efficacy of these strategies in a 5G broadcasting context. Ultimately, this paper aims to offer valuable insights for broadcasters, policymakers, and technologists, enabling them to enhance the security and integrity of 5G broadcasting networks through informed decision-making and the implementation of best practices.
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 1008–1017.
Citations: 0
Cross-Dimensional Attention Fusion Network for Simulated Single Image Super-Resolution
IF 3.2 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-25 | DOI: 10.1109/TBC.2024.3408643
Jingbo He;Xiaohai He;Shuhua Xiong;Honggang Chen
Single image super-resolution (SISR) is the task of reconstructing high-resolution (HR) images from low-resolution (LR) images obtained through some degradation process. Deep neural networks (DNNs) have greatly advanced the frontier of image super-resolution research and replaced traditional methods as the de facto standard approach, and attention mechanisms have enabled SR algorithms to achieve one performance breakthrough after another. However, limited research has been conducted on the interaction and integration of attention mechanisms across different dimensions. To tackle this issue, we propose a cross-dimensional attention fusion network (CAFN) that effectively achieves cross-dimensional interaction with long-range dependencies. Specifically, the proposed approach employs a cross-dimensional aggregation module (CAM) to capture contextual information by integrating both spatial and channel importance maps. The information fusion module (IFM) in the CAM serves as a bridge for parallel dual-attention information fusion. In addition, a novel memory-adaptive multi-stage (MAMS) training method is proposed: we perform warm-start retraining with the same settings as the previous stage, without increasing memory consumption, and if memory is sufficient, we fine-tune the model with a larger patch size after the warm start. Experimental results demonstrate the superior performance of our cross-dimensional attention fusion network and training strategy compared to state-of-the-art (SOTA) methods, as evidenced by both quantitative and qualitative metrics.
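At toy scale, combining a channel importance map with a spatial importance map can be sketched as follows. The fixed softmax/sigmoid pooling formulas below stand in for the paper's learned CAM and IFM modules, so this is a conceptual sketch only:

```python
# Conceptual sketch (assumed formulas, not the paper's learned modules):
# derive a per-channel weight and a per-pixel weight from a feature map,
# then re-weight the map by both.
import math

def channel_weights(feat):
    """Global-average-pool each channel, then softmax -> channel importance."""
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feat]
    exps = [math.exp(p - max(pooled)) for p in pooled]
    total = sum(exps)
    return [e / total for e in exps]

def spatial_map(feat):
    """Mean over channels at each pixel, squashed by a sigmoid -> spatial importance."""
    h, w = len(feat[0]), len(feat[0][0])
    mean = [[sum(ch[i][j] for ch in feat) / len(feat) for j in range(w)] for i in range(h)]
    return [[1.0 / (1.0 + math.exp(-v)) for v in row] for row in mean]

def fuse(feat):
    """Element-wise product of the feature map with both attention maps."""
    cw, sm = channel_weights(feat), spatial_map(feat)
    return [[[feat[c][i][j] * cw[c] * sm[i][j]
              for j in range(len(feat[0][0]))]
             for i in range(len(feat[0]))]
            for c in range(len(feat))]

feat = [[[1.0, 2.0], [3.0, 4.0]],   # channel 0 of a 2x2, 2-channel feature map
        [[0.0, 0.0], [0.0, 8.0]]]   # channel 1
out = fuse(feat)   # same shape, re-weighted by both importance maps
```

In the paper both maps are produced by trained sub-networks and merged by the IFM; here the point is only that the two attentions act on different axes of the same tensor and can be applied jointly.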
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 909–923.
Citations: 0
No-Reference VMAF: A Deep Neural Network-Based Approach to Blind Video Quality Assessment
IF 3.2 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-19 | DOI: 10.1109/TBC.2024.3399479
Axel De Decker;Jan De Cock;Peter Lambert;Glenn Van Wallendael
As the demand for high-quality video content continues to rise, accurately assessing the visual quality of digital videos has become more crucial than ever before. However, evaluating the perceptual quality of an impaired video in the absence of the original reference signal remains a significant challenge. To address this problem, we propose a novel No-Reference (NR) video quality metric called NR-VMAF. Our method is designed to replicate the popular Full-Reference (FR) metric VMAF in scenarios where the reference signal is unavailable or impractical to obtain. Like its FR counterpart, NR-VMAF is tailored specifically for measuring video quality in the presence of compression and scaling artifacts. The proposed model utilizes a deep convolutional neural network to extract quality-aware features from the pixel information of the distorted video, thereby eliminating the need for manual feature engineering. By adopting a patch-based approach, we are able to process high-resolution video data without any information loss. While the current model is trained solely on H.265/HEVC videos, its performance is verified on subjective datasets containing mainly H.264/AVC content. We demonstrate that NR-VMAF outperforms current state-of-the-art NR metrics while achieving a prediction accuracy that is comparable to VMAF and other FR metrics. Based on this strong performance, we believe that NR-VMAF is a viable approach to efficient and reliable No-Reference video quality assessment.
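For contrast with the no-reference setting, the sketch below computes PSNR, a minimal full-reference metric. It is included purely to show why a full-reference metric needs the pristine frame; it is not part of NR-VMAF:

```python
# Minimal full-reference baseline (PSNR): note the function needs BOTH the
# pristine reference frame and the distorted frame, which is exactly the
# requirement NR-VMAF is designed to remove.
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized 8-bit frames."""
    n, se = 0, 0.0
    for ref_row, dis_row in zip(reference, distorted):
        for r, d in zip(ref_row, dis_row):
            se += (r - d) ** 2
            n += 1
    if se == 0:
        return float("inf")   # identical frames
    mse = se / n
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [[100, 110], [120, 130]]   # tiny 2x2 "frame"
deg = [[101, 108], [121, 129]]   # mildly distorted copy
print(round(psnr(ref, deg), 1))  # 45.7
```

A no-reference metric must predict a score of this kind from the distorted pixels alone, which is what the proposed deep network learns to do.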
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 844–861.
Citations: 0
Multi-Reference-Based Cross-Scale Feature Fusion for Compressed Video Super Resolution
IF 3.2 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-13 | DOI: 10.1109/TBC.2024.3407517
Lu Chen;Mao Ye;Luping Ji;Shuai Li;Hongwei Guo
To save transmission bandwidth, one common approach down-samples a video before compression and up-samples the compressed video after decoding. Existing super-resolution (SR) methods generally design powerful networks to compensate for the information lost through down-sampling, but the information in the entire video is neither fully utilized nor effectively fused, so the learned context is insufficient for high-quality reconstruction. We propose a multi-high-quality-frame Referenced Cross-scale compressed Video Super Resolution method (RCVSR) that judiciously uses past and future information to pursue higher compression efficiency. Specifically, a joint reference motion alignment module is proposed: the up-sampled low-resolution (LR) frame is separately aligned with past and future reference frames to preserve more spatial details, and at the same time aligned with neighboring frames to obtain continuous motion information and similar content. Then, a reference-based refinement module compensates for motion and lost texture details by computing a similarity matrix across channel dimensions. Finally, an attention-guided dual-branch residual module further enhances the reconstructed result. Compared with the HEVC anchor, the average Bjontegaard Delta Rate (BD-Rate) gain under the Low-Delay-P (LDP) setting is 24.86%. In addition, an experimental comparison is made with advanced SR methods and compressed video quality enhancement (VQE) methods, further demonstrating the superior efficiency and generalization of the proposed algorithm.
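The BD-Rate number above summarizes the average bitrate difference between two rate-distortion curves at equal quality. The sketch below computes a simplified piecewise-linear variant; the standard Bjontegaard calculation fits a cubic polynomial to log-rate versus PSNR, and the rate-quality points here are made up for illustration:

```python
# Simplified BD-Rate (piecewise-linear in log-rate over PSNR, not the
# standard cubic fit); the rate-distortion points below are invented.
import math

def _interp(x, xs, ys):
    """Piecewise-linear interpolation of ys over ascending xs, evaluated at x."""
    for x0, y0, x1, y1 in zip(xs, ys, xs[1:], ys[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside the anchor range")

def bd_rate(anchor, candidate, steps=100):
    """Average bitrate change (%) of candidate vs. anchor at equal PSNR.
    Each argument is a list of (bitrate, psnr) points, ascending in PSNR.
    Negative result = bitrate savings at the same quality."""
    la, pa = [math.log10(r) for r, _ in anchor], [p for _, p in anchor]
    lc, pc = [math.log10(r) for r, _ in candidate], [p for _, p in candidate]
    lo, hi = max(pa[0], pc[0]), min(pa[-1], pc[-1])   # overlapping PSNR range
    diffs = [_interp(lo + (hi - lo) * k / steps, pc, lc)
             - _interp(lo + (hi - lo) * k / steps, pa, la)
             for k in range(steps + 1)]
    avg = sum(diffs) / len(diffs)
    return (10.0 ** avg - 1.0) * 100.0

anchor = [(1000, 32.0), (2000, 35.0), (4000, 38.0), (8000, 41.0)]
candidate = [(500, 32.0), (1000, 35.0), (2000, 38.0), (4000, 41.0)]
print(round(bd_rate(anchor, candidate), 1))  # -50.0: half the bitrate at every quality
```

A BD-Rate gain of 24.86% as reported in the paper means the proposed pipeline reaches the anchor's quality at roughly a quarter less bitrate on average.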
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 895–908.
Citations: 0
Energy Efficiency Optimization Method of WDM Visible Light Communication System for Indoor Broadcasting Networks
IF 4.5 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-13 | DOI: 10.1109/tbc.2024.3407606
Dayu Shi, Xun Zhang, Ziqi Liu, Xuanbang Chen, Jianghao Li, Xiaodong Liu, William Shieh
Citations: 0
A Sobolev Norm-Based Variational Approach to Companding for PAPR Reduction in OFDM Systems
IF 3.2 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-12 | DOI: 10.1109/TBC.2024.3405346
Stephen DelMarco
In this paper we present a new approach to high-performance compander design to reduce the peak-to-average power ratio (PAPR) that typically arises in orthogonal frequency division multiplexing (OFDM) systems. Whereas many current compander designs assume a parametric model for the transformed Rayleigh amplitude distribution, we define a constrained optimization problem for the functional form of the transformed distribution. We determine an optimal distribution that deviates minimally from the Rayleigh distribution, using a Sobolev norm to quantify distance. The Sobolev norm imposes smoothness constraints on the transformed distribution, which are associated with lower out-of-band interference levels. We incorporate Lagrange multipliers into the problem formulation to enforce the constant-power and probability-density-function constraints, solve the constrained optimization problem using techniques from the calculus of variations, and discuss compander and decompander design. We also investigate the effect of incorporating derivative information into the optimization formulation on compander performance. We demonstrate compander performance through numerical simulation and compare it to the performance of a state-of-the-art variational compander that does not use derivative information. Finally, we demonstrate performance improvements in out-of-band power rejection using the new compander.
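As context for what a compander does, the sketch below builds a toy OFDM symbol, measures its PAPR, and applies the classic mu-law compander as a baseline. The paper's Sobolev-norm design is a different, optimized transform, and the subcarrier data here is randomly generated:

```python
# Baseline illustration: PAPR of a toy OFDM symbol before/after classic
# mu-law companding (NOT the paper's Sobolev-norm compander).
import cmath
import math
import random

def ofdm_symbol(symbols):
    """Inverse DFT: frequency-domain symbols -> time-domain OFDM samples."""
    n = len(symbols)
    return [sum(symbols[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def papr_db(samples):
    """Peak-to-average power ratio of a sample block, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10.0 * math.log10(max(powers) / (sum(powers) / len(powers)))

def mu_law_compand(samples, mu=16.0):
    """Compress sample amplitudes with the mu-law curve, keeping each phase."""
    peak = max(abs(s) for s in samples)
    return [cmath.rect(peak * math.log(1 + mu * abs(s) / peak) / math.log(1 + mu),
                       cmath.phase(s))
            for s in samples]

random.seed(7)  # reproducible toy subcarrier data
qpsk = [complex(random.choice((-1, 1)), random.choice((-1, 1))) for _ in range(64)]
x = ofdm_symbol(qpsk)
print(round(papr_db(x), 2), "->", round(papr_db(mu_law_compand(x)), 2))
```

Because the mu-law curve leaves the peak amplitude fixed while boosting smaller samples, average power rises and the PAPR falls, at the cost of in-band distortion that a decompander must undo at the receiver.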
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 955–962.
Citations: 0
IEEE Transactions on Broadcasting Information for Authors
IF 4.5 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-07 | DOI: 10.1109/TBC.2024.3408433
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. C3–C4.
Citations: 0
IEEE Transactions on Broadcasting Publication Information
IF 4.5 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-07 | DOI: 10.1109/TBC.2024.3408431
IEEE Transactions on Broadcasting, vol. 70, no. 2, p. C2.
Citations: 0