
Latest Publications in IEEE Transactions on Broadcasting

2024 Scott Helt Memorial Award for the Best Paper Published in the IEEE Transactions on Broadcasting
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-12-11 DOI: 10.1109/TBC.2024.3492772
Presents the recipients of the 2024 Scott Helt Memorial Award.
Volume 70, Issue 4, pp. 1316-1317.
Citations: 0
IEEE Transactions on Broadcasting Information for Authors
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-12-11 DOI: 10.1109/TBC.2024.3495317
Volume 70, Issue 4, pp. C3-C4.
Citations: 0
Omnidirectional Image Quality Assessment With Mutual Distillation
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-12-03 DOI: 10.1109/TBC.2024.3503435
Pingchuan Ma;Lixiong Liu;Chengzhi Xiao;Dong Xu
There exists a complementary relationship among the different projection formats of omnidirectional images. However, most existing omnidirectional image quality assessment (OIQA) methods operate solely on a single projection format and rarely explore solutions that span multiple formats. To this end, we propose a mutual distillation-based omnidirectional image quality assessment method, abbreviated as MD-OIQA. MD-OIQA exploits the complementary relationship between different projection formats to improve the feature representation of omnidirectional images for quality prediction. Specifically, we separately feed equirectangular projection (ERP) and cubemap projection (CMP) images into two peer student networks to capture quality-aware features of the respective projection contents. Meanwhile, we propose a self-adaptive mutual distillation module (SAMDM) that deploys mutual distillation at multiple network stages to achieve mutual learning between the two networks. SAMDM captures useful knowledge from the dynamically optimized networks to strengthen the effect of mutual distillation, enhancing feature interactions through a deep cross network and generating masks to efficiently capture the complementary information from the different projection contents. Finally, the features extracted from a single projection content are used for quality prediction. Experimental results on three public databases demonstrate that the proposed method efficiently improves the model's representation capability and achieves superior performance.
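The mutual learning between the two peer students described above can be roughly illustrated as follows. This is a toy sketch, not the authors' code: the temperature-scaled symmetric KL form is an assumption about how such a distillation term is commonly written.

```python
import numpy as np

def softmax(x, t=1.0):
    # Temperature-scaled softmax over the last axis (numerically stabilized).
    z = np.exp((x - x.max(axis=-1, keepdims=True)) / t)
    return z / z.sum(axis=-1, keepdims=True)

def mutual_distillation_loss(logits_erp, logits_cmp, t=2.0):
    """Symmetric KL divergence between the two peer students' predicted
    distributions: the ERP branch is pulled toward the CMP branch and vice
    versa, so knowledge flows in both directions.
    """
    p = softmax(logits_erp, t)
    q = softmax(logits_cmp, t)
    kl_pq = np.sum(p * np.log(p / q), axis=-1)
    kl_qp = np.sum(q * np.log(q / p), axis=-1)
    return float(np.mean(kl_pq + kl_qp))
```

In training, each network would minimize its own task loss plus a term of this shape against the other network's (detached) predictions.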
Volume 71, Issue 1, pp. 264-276.
Citations: 0
VMG: Rethinking U-Net Architecture for Video Super-Resolution
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-11-21 DOI: 10.1109/TBC.2024.3486967
Jun Tang;Lele Niu;Linlin Liu;Hang Dai;Yong Ding
The U-Net architecture has exhibited significant efficacy across various vision tasks, yet its adaptation for Video Super-Resolution (VSR) remains underexplored. While the Video Restoration Transformer (VRT) introduced U-Net into the VSR domain, its intricate design incurs substantial computational overhead. In this paper, we present VMG, a streamlined framework tailored for VSR. Through empirical analysis, we identify the stages of the U-Net architecture that are crucial to performance in VSR tasks. Our optimized architecture substantially reduces model parameters and complexity while improving performance. Additionally, we introduce two key modules, the Gated MLP-like Mixer (GMM) and the Flow-Guided cross-attention Mixer (FGM), designed to enhance spatial and temporal feature aggregation. GMM dynamically encodes spatial correlations with linear complexity in space and time, while FGM leverages optical flow to capture motion variation and applies sparse attention to efficiently aggregate temporally related information. Extensive experiments demonstrate that VMG achieves a nearly 70% reduction in GPU memory usage, 30% fewer parameters, and 10% lower computational complexity (FLOPs) than VRT, while yielding highly competitive or superior results across four benchmark datasets. Qualitative assessments reveal VMG's ability to preserve fine details and sharp structures in the reconstructed videos. The code and pre-trained models are available at https://github.com/EasyVision-Ton/VMG.
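The paper's GMM is not reproduced here; as a hedged sketch of the general gated-MLP idea such mixers build on (names, shapes, and the identity-initialized mixing matrix are illustrative assumptions), a spatial gating unit can look like:

```python
import numpy as np

def spatial_gating_unit(x, w_spatial, b_spatial):
    """Gated-MLP building block: split the channels in half, mix one half
    linearly along the spatial (token) axis, then use it as an elementwise
    gate on the other half. Unlike softmax attention, the mixing weights are
    static and the gate is a simple elementwise product.

    x: (tokens, channels) with an even channel count
    w_spatial: (tokens, tokens) learned spatial-mixing weights
    b_spatial: (tokens, channels // 2) bias
    """
    u, v = np.split(x, 2, axis=-1)
    v = w_spatial @ v + b_spatial   # linear mixing along the spatial axis
    return u * v                    # elementwise gate

rng = np.random.default_rng(0)
tokens, channels = 16, 8
x = rng.standard_normal((tokens, channels))
w = np.eye(tokens) + 0.01 * rng.standard_normal((tokens, tokens))
b = np.zeros((tokens, channels // 2))
y = spatial_gating_unit(x, w, b)   # shape (16, 4)
```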
Volume 71, Issue 1, pp. 334-349.
Citations: 0
Comparative Assessment of Physical Layer Performance: ATSC 3.0 vs. 5G Broadcast in Laboratory and Field Tests
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-11-21 DOI: 10.1109/TBC.2024.3482183
Sunhyoung Kwon;Seok-Ki Ahn;Sungjun Ahn;Sungho Jeon;Sesh Simha;Mark Aitken;Anindya Saha;Prashant M. Maru;Parag Naik;Sung-Ik Park
This paper presents a comparative analysis of the physical layer performance of ATSC 3.0 and 3GPP 5G Broadcast through comprehensive laboratory and field tests. The study evaluates a range of reception scenarios, including fixed and mobile environments, under channel conditions such as additive white Gaussian noise and mobile channels. Key performance metrics, such as the threshold of visibility (ToV) and the erroneous second ratio (ESR), are measured to assess the reception quality of each standard. The results demonstrate that ATSC 3.0 generally outperforms 5G Broadcast thanks to its advanced bit-interleaved coded modulation and time interleaving techniques, which effectively mitigate burst errors in mobile channels.
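The ESR metric mentioned above is computed directly from per-second error observations; a minimal sketch (the one-second binning follows the metric's standard definition, while the sample data are made up):

```python
def erroneous_second_ratio(errors_per_second):
    """Fraction of one-second intervals that contain at least one
    uncorrected error (ESR); 0.0 means error-free reception.
    """
    n = len(errors_per_second)
    if n == 0:
        return 0.0
    return sum(1 for e in errors_per_second if e > 0) / n

# A 10-second measurement with errors observed in 2 of the seconds:
esr = erroneous_second_ratio([0, 3, 0, 0, 1, 0, 0, 0, 0, 0])
```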
Volume 71, Issue 1, pp. 2-10.
Citations: 0
Unsupervised 3D Point Cloud Reconstruction via Exploring Multi-View Consistency and Complementarity
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-11-12 DOI: 10.1109/TBC.2024.3484269
Jiahui Song;Yonghong Hou;Bo Peng;Tianyi Qin;Qingming Huang;Jianjun Lei
Unsupervised 3D point cloud reconstruction plays an increasingly important role in 3D multimedia broadcasting, virtual reality, and augmented reality. Considering that multiple views collectively provide abundant object geometry and structure information, this paper proposes a novel Unsupervised Multi-View 3D Point Cloud Reconstruction Network (UMPR-Net) that reconstructs high-quality 3D point clouds by effectively exploring multi-view consistency and complementarity. In particular, by perceiving the consistency of local object information contained in different views, a consistency-aware point cloud reconstruction module is designed to reconstruct a 3D point cloud for each individual view. Additionally, a complementarity-oriented point cloud fusion module aggregates reliable complementary information from the multiple point clouds corresponding to the diverse views, ultimately yielding a refined 3D point cloud. By projecting the reconstructed 3D point clouds onto 2D planes and constraining the consistency between the 2D projections and the 2D supervision, UMPR-Net is encouraged to reconstruct high-quality 3D point clouds from multiple views. Experimental results on synthetic and real-world datasets validate the effectiveness of the proposed UMPR-Net.
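The projection-based consistency constraint described above can be illustrated with a toy orthographic projection and a symmetric Chamfer distance between 2D point sets; this is an illustrative stand-in, not the paper's actual projection model or loss:

```python
import numpy as np

def project_orthographic(points, drop_axis=2):
    """Orthographic 2D projection of an (N, 3) point cloud by dropping one
    axis (a toy stand-in for the camera projections used for 2D supervision)."""
    keep = [i for i in range(3) if i != drop_axis]
    return points[:, keep]

def chamfer_2d(a, b):
    """Symmetric Chamfer distance between two 2D point sets; used here as
    the consistency measure between a projection and its 2D supervision."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

pts = np.array([[0., 0., 0.], [1., 2., 3.]])
proj = project_orthographic(pts)   # -> [[0., 0.], [1., 2.]]
```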
Volume 71, Issue 1, pp. 193-202.
Citations: 0
Perception- and Fidelity-Aware Reduced-Reference Super-Resolution Image Quality Assessment
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-11-04 DOI: 10.1109/TBC.2024.3475820
Xinying Lin;Xuyang Liu;Hong Yang;Xiaohai He;Honggang Chen
With the advent of image super-resolution (SR) algorithms, evaluating the quality of the generated SR images has become an urgent task. Although full-reference methods perform well in SR image quality assessment (SR-IQA), their reliance on high-resolution (HR) images limits their practical applicability. Leveraging as much of the available reconstruction information as possible, such as the low-resolution (LR) image and the scale factor, is a promising way to enhance assessment performance when no HR reference is available. In this paper, we evaluate the perceptual quality and reconstruction fidelity of SR images using LR images and scale factors. Specifically, we propose a novel dual-branch reduced-reference SR-IQA network, i.e., Perception- and Fidelity-aware SR-IQA (PFIQA). The perception-aware branch evaluates the perceptual quality of SR images by combining the global modeling of the Vision Transformer (ViT) with the local relations of ResNet, and incorporates the scale factor to enable comprehensive visual perception. Meanwhile, the fidelity-aware branch assesses the reconstruction fidelity between the LR and SR images through their visual perception. The combination of the two branches aligns closely with the human visual system, enabling a comprehensive SR image evaluation. Experimental results indicate that PFIQA outperforms current state-of-the-art models across three widely used SR-IQA benchmarks. Notably, PFIQA excels in assessing the quality of real-world SR images. Our code is available at https://github.com/xinyouu/PFIQA.
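As a toy illustration of the reduced-reference idea above (re-degrade the SR output and compare it with the available LR input, so no HR reference is needed), here is a minimal sketch; the box filter and the score mapping are assumptions for illustration, not PFIQA's actual fidelity branch:

```python
import numpy as np

def downscale(img, factor):
    """Naive box downscale by an integer factor (an illustrative stand-in
    for the degradation model relating an SR output to its LR input)."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def fidelity_score(lr, sr, factor):
    """Reduced-reference fidelity cue: re-degrade the SR image and compare
    it with the LR image; 1.0 means perfect agreement."""
    err = np.mean((downscale(sr, factor) - lr) ** 2)
    return 1.0 / (1.0 + err)

rng = np.random.default_rng(0)
sr = rng.standard_normal((8, 8))
lr = downscale(sr, 2)              # pretend this was the LR input
```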
Volume 71, Issue 1, pp. 323-333.
Citations: 0
No-Reference Point Cloud Quality Assessment Through Structure Sampling and Clustering Based on Graph
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-29 DOI: 10.1109/TBC.2024.3482173
Xinqiang Wu;Zhouyan He;Gangyi Jiang;Mei Yu;Yang Song;Ting Luo
As a popular multimedia representation, 3D Point Clouds (PCs) inevitably encounter distortion during acquisition, processing, coding, and transmission, resulting in visual quality degradation. Therefore, it is critical to develop Point Cloud Quality Assessment (PCQA) methods that perceive the visual quality of PCs. In this paper, we propose a no-reference PCQA method based on graph-driven structure sampling and clustering, which consists of two-stage pre-processing, quality feature extraction, attention-based feature fusion, and feature regression. For pre-processing, considering the Human Visual System's (HVS) tendency to perceive distortions in both the global structure and the local details of PCs, a two-stage sampling strategy is introduced. Specifically, to adapt to the irregular structure of PCs, it uses structural key point sampling and local clustering to capture global and local information, respectively, thereby facilitating more effective learning of distortion features. Then, in quality feature extraction, two modules are designed based on the two-stage pre-processing results, i.e., Global Feature Extraction (GFE) and Local Feature Extraction (LFE), to extract global and local quality features, respectively. Additionally, for attention-based feature fusion, a Unified Feature Integrator (UFI) module is proposed. This module enhances quality perception capability by integrating global features with the individual local quality features, and introduces a Transformer to interact with the integrated quality features. Finally, feature regression maps the final features to the quality score. The performance of the proposed method is tested on four publicly available databases, and the experimental results show that it outperforms existing state-of-the-art no-reference PCQA methods in most cases.
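Structural key point sampling on irregular point sets is commonly realized with greedy farthest point sampling; the sketch below shows that generic technique (an assumption about the sampler, not necessarily the paper's exact choice):

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy farthest-point sampling: repeatedly pick the point farthest
    from the set already chosen, yielding k structurally spread-out key
    points from an (N, 3) cloud."""
    chosen = [0]                                   # start from an arbitrary point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))                 # farthest from the chosen set
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[chosen]
```

Local clusters around each key point would then be gathered (e.g. by k-nearest neighbors) to feed the local feature branch.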
Volume 71, Issue 1, pp. 307-322.
Citations: 0
A Radio Propagation Modeling for a Cost-Effective DAB+ Service Coverage in Tunnels
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-28 DOI: 10.1109/TBC.2024.3484268
Bruno Sacco;Assunta De Vita
Providing satisfactory coverage of the Digital Audio Broadcasting (DAB+) service inside tunnels, in the VHF band, is a very challenging task. The classic, but expensive, solution adopted so far is the use of radiating cables ("leaky feeders") installed on the tunnel's ceiling over its entire length. An alternative and cheaper solution, investigated in the present paper, is the so-called "direct RF radiation" approach, consisting of antennas placed inside the tunnel or just outside its entrance. A simulation analysis has been carried out to evaluate the impact of the design parameters and to serve as a tool for estimating the achievable service coverage. In addition, assuming the gallery behaves like a lossy waveguide, a mode analysis has been performed on the tunnel cross section, providing a fairly good estimate of the wave propagation attenuation. Interesting outcomes have been obtained from this simulation study: for instance, the behavior of the electric field as a function of distance suggests that, in the absence of geometric perturbations, the slope in the far zone is in good agreement with the per-unit-distance attenuation of the main propagation mode. Curved sections cause further attenuation that depends on the radius of curvature, and the geometric dimensions of the tunnel section have a very strong impact on attenuation. Furthermore, for arched tunnel sections, the fundamental propagation mode is horizontally polarized. As a result, the typical "whip" vehicular receiving antenna is not adequate: a horizontally polarized antenna would provide much better service inside tunnels. These findings have led to the development of a tool applicable to every type of tunnel configuration for the verification and optimization of direct RF radiation installations for DAB/DAB+ services.
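The strong dependence of attenuation on tunnel dimensions noted above matches the classic Emslie-style lossy-waveguide approximation, in which the per-metre loss of the dominant mode scales with the square of the wavelength and the inverse cube of the tunnel dimensions. The sketch below encodes that structure; the exact constants and the wall permittivity value are illustrative assumptions, not figures from the paper:

```python
import math

def tunnel_attenuation_db_per_m(freq_hz, width_m, height_m, eps_r=5.0):
    """First-order attenuation (dB/m) of the dominant horizontally polarized
    mode in a rectangular tunnel modeled as a lossy dielectric waveguide
    (Emslie-style approximation). The wall permittivity eps_r and the
    constants are illustrative; the key structure is the lambda^2 dependence
    and the inverse-cube dependence on the tunnel dimensions.
    """
    lam = 3.0e8 / freq_hz                      # free-space wavelength (m)
    root = math.sqrt(eps_r - 1.0)
    alpha_np_per_m = lam ** 2 * (eps_r / (width_m ** 3 * root)
                                 + 1.0 / (height_m ** 3 * root))
    return 4.343 * alpha_np_per_m              # nepers -> dB
```

At VHF wavelengths and typical tunnel cross sections this loss is substantial, consistent with the paper's observation that the tunnel geometry dominates the achievable coverage.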
Volume 71, Issue 1, pp. 52-62.
Citations: 0
Outage Probability Analysis of Cooperative NOMA With Successive Refinement
IF 3.2 CAS Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-25 DOI: 10.1109/TBC.2024.3477000
Meng Cheng;Yifan Zhou;Shuang Wei;Shen Qian
This paper proposes a broadcasting system with cooperative non-orthogonal multiple access (CO-NOMA) and successive refinement (SR) coding. Specifically, signals containing the basic description of the source and the refinement are overlapped at the transmitter, and broadcast to user equipment (UE) with different quality-of-service (QoS) requirements. Although the far UEs may only be capable of decoding the basic description allocated with higher transmit power, some of them may still demand a high QoS like the near UE. To address this issue, this work utilizes the near UE to establish a relay transmission, so that the information recovered at the far UE can be refined. Considering three different relaying schemes, the outage probabilities of the proposed system are derived in closed form, assuming all channels suffer from block Rayleigh fading. Based on the optimal power allocations, the best scheme yielding the lowest outage probabilities is found, and the advantages over down-link NOMA with SR (DN-SR) and conventional CO-NOMA are also demonstrated.
{"title":"Outage Probability Analysis of Cooperative NOMA With Successive Refinement","authors":"Meng Cheng;Yifan Zhou;Shuang Wei;Shen Qian","doi":"10.1109/TBC.2024.3477000","DOIUrl":"https://doi.org/10.1109/TBC.2024.3477000","url":null,"abstract":"This paper proposes a broadcasting system with cooperative non-orthogonal multiple access (CO-NOMA) and successive refinement (SR) coding. Specifically, signals containing the basic description of the source and the refinement are overlapped at the transmitter, and broadcast to user equipment (UE) with different quality-of-service (QoS) requirements. Although the far UEs may only be capable of decoding the basic description allocated with higher transmit power, some of them may still demand a high QoS like the near UE. To address this issue, this work utilizes the near UE to establish a relay transmission, so that the information recovered at the far UE can be refined. Considering three different relaying schemes, the outage probabilities of the proposed system are derived in closed form, assuming all channels suffer from block Rayleigh fading. Based on the optimal power allocations, the best scheme yielding the lowest outage probabilities is found, and the advantages over down-link NOMA with SR (DN-SR) and conventional CO-NOMA are also demonstrated.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"42-51"},"PeriodicalIF":3.2,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
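The paper derives its outage probabilities in closed form; the basic mechanics the abstract describes — a power-domain split between a basic and a refinement layer, far-user decoding that treats the other layer as noise, and near-user successive interference cancellation, all under block Rayleigh fading — can be sketched with a Monte Carlo simulation. The power split, target rates, and SNR points below are illustrative assumptions, not values from the paper, and the three relaying schemes it compares are not modelled.

```python
import numpy as np

def noma_outage(snr_db, a_far=0.8, r_far=0.5, r_near=1.0, n=200_000, seed=0):
    """Monte Carlo outage probabilities for one two-user downlink NOMA pair
    under block Rayleigh fading (unit-mean exponential channel gains).

    a_far  : fraction of transmit power on the far user's (basic) layer
    r_far  : target rate of the basic description [bit/s/Hz]
    r_near : target rate of the near user's own layer [bit/s/Hz]
    Returns (P_out_far, P_out_near).
    """
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    a_near = 1.0 - a_far
    g_far = rng.exponential(1.0, n)   # |h_far|^2, unit mean
    g_near = rng.exponential(1.0, n)  # |h_near|^2, unit mean

    # Far user decodes only the basic layer, treating the near-user layer as noise.
    sinr_far = a_far * snr * g_far / (a_near * snr * g_far + 1.0)
    out_far = np.log2(1.0 + sinr_far) < r_far

    # Near user first strips the basic layer via SIC, then decodes its own layer.
    sinr_sic = a_far * snr * g_near / (a_near * snr * g_near + 1.0)
    sic_ok = np.log2(1.0 + sinr_sic) >= r_far
    out_near = ~(sic_ok & (np.log2(1.0 + a_near * snr * g_near) >= r_near))

    return out_far.mean(), out_near.mean()
```

One property this makes visible: the far user's SINR saturates at a_far/a_near as SNR grows, so the basic rate must stay below log2(1 + a_far/a_near) — about 2.32 bit/s/Hz for a_far = 0.8 — for its outage to vanish at high SNR.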