2025 Scott Helt Memorial Award for the Best Paper Published in IEEE Transactions on Broadcasting
Pub Date: 2025-12-17. DOI: 10.1109/TBC.2025.3640887
IEEE Transactions on Broadcasting, vol. 71, no. 4, pp. 1108-1110. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11302029
IEEE Transactions on Broadcasting Information for Readers and Authors
Pub Date: 2025-12-17. DOI: 10.1109/TBC.2025.3640761
IEEE Transactions on Broadcasting, vol. 71, no. 4, pp. C3-C4. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11302004
Blind Light Field Image Quality Assessment Using Multiplane Texture and Multilevel Wavelet Information
Zhengyu Zhang; Shishun Tian; Jianjun Xiang; Wenbin Zou; Luce Morin; Lu Zhang
Pub Date: 2025-11-06. DOI: 10.1109/TBC.2025.3627787
IEEE Transactions on Broadcasting, vol. 71, no. 4, pp. 1092-1107
Light Field Image (LFI) has garnered remarkable interest and fascination due to its burgeoning significance in immersive applications. Although the abundant information in LFIs enables a more immersive experience, it also poses a greater challenge for Light Field Image Quality Assessment (LFIQA), especially when reference information is inaccessible. In this paper, inspired by the holistic visual perception of high-dimensional LFIs and neuroscience studies on the Human Visual System (HVS), we propose a novel Blind Light Field image quality assessment metric by exploring MultiPlane Texture and Multilevel Wavelet Information, abbreviated as MPT-MWI-BLiF. Specifically, considering the texture sensitivity of the secondary visual cortex (V2), we first convert LFIs into multiple individual planes and capture textural variations from these planes. Then, the statistical histogram of textural variations for all planes is calculated as holistic textural variation features. In addition, motivated by the fact that neuronal responses in the visual cortex are frequency-dependent, we simulate this visual perception process by decomposing LFIs into multilevel wavelet subbands with Four-Dimensional Discrete Haar Wavelet Transform (4D-DHWT). After that, the subband geometric features of first-level 4D-DHWT subbands and the coefficient intensity features of second-level 4D-DHWT subbands are computed respectively. Finally, we combine all the extracted quality-aware features and employ the widely-used Support Vector Regression (SVR) to predict the perceptual quality of LFIs. To fully validate the effectiveness of the proposed metric, we perform extensive experiments on five representative LFIQA databases with two cross-validation methods. Experimental results demonstrate the superiority of the proposed metric in quality evaluation, as well as its low time complexity compared to other state-of-the-art metrics. The full code will be publicly available at https://github.com/ZhengyuZhang96/MPT-MWI-BLiF
IRBFusion: Diffusion-Based Blind Image Super Resolution Using Unsupervised Learning and Bank of Restoration Networks
Morteza Poudineh; Alireza Esmaeilzehi; M. Omair Ahmad
Pub Date: 2025-10-30. DOI: 10.1109/TBC.2025.3622337
IEEE Transactions on Broadcasting, vol. 71, no. 4, pp. 1048-1064
Image super resolution focuses on increasing the spatial resolution of low-quality images and enhancing their visual quality. Since the image degradation process is unknown in real-life scenarios, it is crucial to perform image super resolution in a blind manner. Diffusion models have revolutionized the task of blind image super resolution in view of their powerful capability of producing realistic textures and structures. The design of the condition network is a key factor for diffusion models in providing high image super resolution performance. In this regard, we develop an effective image restoration bank by using a three-stage learning algorithm based on the idea of unsupervised learning, and feed its results, wherein visual artifacts are remarkably suppressed, to the condition network. The use of unsupervised learning in the design of our image restoration bank ensures that both diverse contextual information of visual signals and different degradation operations are considered for the task of blind image super resolution. Further, we guide the feature generation process of the condition network in such a way that the fidelity of the feature tensors produced for the task of image super resolution remains high. The results of extensive experiments show the superiority of our method over state-of-the-art blind image super resolution schemes on various benchmark datasets.
Geographic Segmented Localcasting Co-Channel Interference Mitigation Using Iterative Joint Detection and Decoding
Hao Ju; Yin Xu; Dazhi He; Haoyang Li; Wenjun Zhang; Yiyan Wu
Pub Date: 2025-10-16. DOI: 10.1109/TBC.2025.3579222
IEEE Transactions on Broadcasting, vol. 71, no. 4, pp. 941-953
Geographic Segmented Localcasting (GSL) is an emerging Digital Terrestrial Television Broadcast (DTTB) physical layer operating mode. The system utilizes LDM-SFN to enable both wide-area Single Frequency Network (SFN) coverage and localized broadcast/multicast services within a single Radio Frequency (RF) broadcast channel. By combining SFN (core layer) and localcasting (enhanced layer) via Layered Division Multiplexing (LDM), GSL improves spectrum efficiency but faces challenges such as co-channel interference and SFN vs. Localcasting Channel Profile Mismatch (LCPM), which limit localcasting coverage. This paper presents two receiving methods that facilitate LDPC-coded LDM signal reception. Method 1 is Multiple localcasting signals Iterative Joint Detection and Decoding (IJDD), which mitigates severe co-channel interference when Channel State Information (CSI) of nearby localcasting transmitters is available. Method 2 is Constellation Rotated IJDD (CR-IJDD), which mitigates severe LCPM without CSI. The proposed methods enable decoding of both desired and interfering signals under high SNR conditions, enhancing spectrum reuse. Additionally, an Early Extrinsic Information Exchange for LDPC Iteration Reduction (EEIE-LIR) scheme is introduced to accelerate convergence and reduce receiver complexity. Evaluations based on ATSC 3.0 ModCods demonstrate that the proposed methods significantly improve spectrum efficiency and mitigate co-channel interference. The proposed technologies can be extended to other DTTB systems and cell-based broadband wireless networks (e.g., fifth-generation (5G) and sixth-generation (6G) networks), supporting seamless integration of broadcast, multicast, and unicast services.
Fusing Learning and Non-Learning: Hybrid CNN-Transformer Cooperative-Competitive Network for Underwater Image Enhancement
Xun Ji; Xu Wang; Li-Ying Hao; Chengtao Cai; Chengsong Dai; Ryan Wen Liu
Pub Date: 2025-09-23. DOI: 10.1109/TBC.2025.3611669
IEEE Transactions on Broadcasting, vol. 71, no. 4, pp. 1065-1078
Underwater image enhancement (UIE) aims to provide high-quality observations of challenging underwater scenarios, which is of great significance for various broadcast technologies. Extensive non-learning-based and learning-based UIE methods have been presented and applied. However, non-learning-based strategies typically struggle to demonstrate superior generalization capabilities, while learning-based strategies generally suffer from potential over- or under-enhancement due to the lack of sufficient prior knowledge. To address the challenges above, this paper presents a heuristic cooperative-competitive network, termed Co2Net. Specifically, our Co2Net integrates non-learning mechanisms into the deep learning framework to achieve information fusion from explainable prior knowledge and discernible hierarchical features, thereby facilitating promising and reasonable enhancement of degraded underwater images. Furthermore, our Co2Net adopts a hybrid convolutional neural network (CNN)-Transformer architecture, which comprises successive cooperative-competitive modules (Co2Ms) to achieve adequate extraction, representation, and transmission of both prior knowledge and discernible features. Comprehensive experiments are conducted to demonstrate the superiority and universality of our proposed Co2Net, and sufficient ablation studies are also performed to reveal the effectiveness of each component within our model. The source code is available at https://github.com/jixun-dmu/Co2Net
FD-LSCIC: Frequency Decomposition-Based Learned Screen Content Image Compression
Shiqi Jiang; Hui Yuan; Shuai Li; Huanqiang Zeng; Sam Kwong
Pub Date: 2025-09-19. DOI: 10.1109/TBC.2025.3609052
IEEE Transactions on Broadcasting, vol. 71, no. 4, pp. 1034-1047
Learned image compression (LIC) methods have already surpassed traditional techniques in compressing natural scene (NS) images. However, directly applying these methods to screen content (SC) images, which possess distinct characteristics such as sharp edges, repetitive patterns, and embedded text and graphics, yields suboptimal results. This paper addresses three key challenges in SC image compression: learning compact latent features, adapting quantization step sizes, and the lack of large SC datasets. To overcome these challenges, we propose a novel compression method that employs a multi-frequency two-stage octave residual block (MToRB) for feature extraction, a cascaded triple-scale feature fusion residual block (CTSFRB) for multi-scale feature integration, and a multi-frequency context interaction module (MFCIM) to reduce inter-frequency correlations. Additionally, we introduce an adaptive quantization module that learns scaled uniform noise for each frequency component, enabling flexible control over quantization granularity. Furthermore, we construct a large SC image compression dataset (SDU-SCICD10K), which includes over 10,000 images spanning basic SC images, computer-rendered images, and mixed NS and SC images from both PC and mobile platforms. Experimental results demonstrate that our approach significantly improves SC image compression performance, outperforming traditional standards and state-of-the-art learning-based methods in terms of peak signal-to-noise ratio (PSNR) and multi-scale structural similarity (MS-SSIM). The source code will be released at https://github.com/SunshineSki/Screen-content-image-dataset/tree/main/SDU-SCICD10K
Rate-Distortion-Optimization-Driven Quantization Parameter Cascading for Screen Content Video Coding Using VVC
Yanchao Gong; Yinghua Li; Baogui Li; Kaifang Yang; Nam Ling
Pub Date: 2025-09-18. DOI: 10.1109/TBC.2025.3609039
IEEE Transactions on Broadcasting, vol. 71, no. 4, pp. 993-1010
Screen content videos (SCVs) have been widely used in television broadcasting, video conferencing, online education, and other fields. Versatile Video Coding (VVC) is a new-generation video coding standard for SCVs, where the quantization parameter (QP) is one of the key coding parameters that significantly affects the coding efficiency of SCVs. The method of selecting optimal QPs for pictures located at different temporal layers is called quantization parameter cascading (QPC). The QPC method recommended by the VVC test model (VTM) does not take into account the impact of video content characteristics on QP selection, resulting in lower coding efficiency for SCVs. To address this issue, a QPC method driven by rate-distortion (R-D) optimization for SCVs (QPC-SCV) is proposed. Combining experiments with the hybrid coding framework principles of VVC, a novel R-D cost function applicable to SCV coding characteristics is first established and validated, in which the spatiotemporal content characteristics of SCVs are evaluated to predict the model parameters. Then, a video motion form classification and particle swarm optimization are further employed to effectively minimize the R-D cost function and obtain optimized QPs. Compared with the QPC recommended by VTM, QPC-SCV improves the coding efficiency of SCVs while reducing the coding time. For all test sequences, the average BD-rate corresponding to QPC-SCV is −6.90%, and the average coding time is reduced by 4.56%.
IEEE Transactions on Broadcasting Information for Readers and Authors
Pub Date: 2025-09-05. DOI: 10.1109/TBC.2025.3603346
IEEE Transactions on Broadcasting, vol. 71, no. 3, pp. C3-C4. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11152557