
Latest Articles from IEEE Transactions on Broadcasting

IEEE Transactions on Broadcasting Publication Information
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-03-05 | DOI: 10.1109/TBC.2025.3542624 | Vol. 71, No. 1, pp. C2-C2
{"title":"IEEE Transactions on Broadcasting Publication Information","authors":"","doi":"10.1109/TBC.2025.3542624","DOIUrl":"https://doi.org/10.1109/TBC.2025.3542624","url":null,"abstract":"","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"C2-C2"},"PeriodicalIF":3.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10913473","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Transactions on Broadcasting Information for Authors
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-03-05 | DOI: 10.1109/TBC.2025.3542626 | Vol. 71, No. 1, pp. C3-C4
{"title":"IEEE Transactions on Broadcasting Information for Authors","authors":"","doi":"10.1109/TBC.2025.3542626","DOIUrl":"https://doi.org/10.1109/TBC.2025.3542626","url":null,"abstract":"","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"C3-C4"},"PeriodicalIF":3.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10913472","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TV 3.0: An Overview
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-01-01 | DOI: 10.1109/TBC.2024.3511928 | Vol. 71, No. 1, pp. 11-18
Allan Seiti Sassaqui Chaubet;Rodrigo Admir Vaz;George Henrique Maranhão Garcia de Oliveira;Ricardo Seriacopi Rabaça;Isabela Coelho Dourado;Gustavo de Melo Valeira;Cristiano Akamine
A new Digital Terrestrial Television Broadcasting (DTTB) system, called Television (TV) 3.0, is being developed in Brazil and is expected to be on air by 2025 under the commercial name DTV+. It started with a Call for Proposals (CfP) for its system components, to which organizations worldwide submitted candidate technologies. After two testing and evaluation phases, the technologies for all layers were selected, the TV 3.0 architecture was completely defined, and the standards were written. The architecture comprises modern Modulation and Coding (MODCOD) techniques, mandatory transmission and reception in Multiple-Input Multiple-Output (MIMO) with cross-polarized antennas, an app-oriented interface, an Internet-based Transport Layer (TL), and state-of-the-art efficient coding for audio, video, and captions. This set of technologies will enable several new use cases that change the user experience with TV, such as Geographically Segmented Broadcasting (GSB), targeted advertising, sensory effects, and interactivity. This paper reviews the phases already concluded in the TV 3.0 project and presents its potential and the current developments in its final stage.
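To make the GSB use case concrete, here is a minimal, purely illustrative sketch of receiver-side segment selection over an IP-based transport; all class names, regions, and URLs are hypothetical and are not part of the TV 3.0 standards.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    region: str
    service_url: str  # TV 3.0 delivers components over an IP-based transport layer

def select_segment(segments: list[Segment], receiver_region: str) -> Segment:
    """Receiver-side choice of the geographically targeted segment."""
    for seg in segments:
        if seg.region == receiver_region:
            return seg
    return segments[0]  # fall back to the default (national) feed

feed = [Segment("national", "https://example.org/national"),
        Segment("sao-paulo", "https://example.org/sp")]
print(select_segment(feed, "sao-paulo").service_url)
```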
{"title":"TV 3.0: An Overview","authors":"Allan Seiti Sassaqui Chaubet;Rodrigo Admir Vaz;George Henrique Maranhão Garcia de Oliveira;Ricardo Seriacopi Rabaça;Isabela Coelho Dourado;Gustavo de Melo Valeira;Cristiano Akamine","doi":"10.1109/TBC.2024.3511928","DOIUrl":"https://doi.org/10.1109/TBC.2024.3511928","url":null,"abstract":"A new Digital Terrestrial Television Broadcasting (DTTB) system, called Television (TV) 3.0, is being developed in Brazil and is expected to be on air by 2025 under the commercial name DTV+. It started with a Call for Proposals (CfP) for its systems components, for which organizations worldwide have submitted candidate technologies. After two testing and evaluation phases, the technologies for all layers were selected, the TV 3.0 architecture was completely defined, and the standards were written. It consists of modern Modulation and Code (MODCOD) techniques, mandatory transmission and reception in Multiple-Input Multiple-Output (MIMO) with cross-polarized antennas, an app-oriented interface, an Internet-based Transport Layer (TL), and state-of-the-art efficient coding for audio, video, and captions. This set of technologies will allow for several new use cases that change the user experience with TV, such as Geographically Segmented Broadcasting (GSB), targeted advertising, sensory effects, and interactivity. This paper reviews the phases already concluded for the TV 3.0 project and presents its potentialities and the current developments at its final stage.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"11-18"},"PeriodicalIF":3.2,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital Entity Management Methodology for Digital Twin Implementation: Concept, Definition, and Examples
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-01-01 | DOI: 10.1109/TBC.2024.3517138 | Vol. 71, No. 1, pp. 19-29
Yegi Lee;Myung-Sun Baek;Kyoungro Yoon
Many efforts to achieve cost savings through simulation have been ongoing in the cyber-physical system (CPS) industry and in manufacturing. Recently, the concept of digital twins has emerged as a promising solution for cost reduction in various fields, such as smart cities, factory optimization, architecture, and manufacturing. Digital twins offer enormous potential by continuously monitoring and updating data to study a wide range of issues and improve products and processes. However, the practical implementation of digital twins presents significant challenges. Additionally, while various studies have introduced the concepts and roles of digital twin systems and digital components, further research is needed on efficient operation and management strategies. This paper presents a digital entity management methodology for the efficient implementation of digital twin systems. The proposed class-level methodology organizes complex and repetitively used digital entities into digital entity classes, which facilitates abstraction, inheritance, and upcasting of those classes. By leveraging class-level management and easily reusable and modifiable digital entities, low-complexity digital twin systems become feasible to implement. The proposed methodology streamlines the digital twin implementation process, addressing complex technical integration and practical implementation challenges.
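As a concrete illustration of class-level digital entity management, the following minimal Python sketch shows digital entity classes with abstraction, inheritance, and upcasting; all class and attribute names are hypothetical illustrations, not the paper's actual API.

```python
from dataclasses import dataclass

@dataclass
class DigitalEntity:
    """Abstract digital entity: common state shared by all twin components."""
    entity_id: str
    synced: bool = False

    def update(self, sensor_data: dict) -> None:
        # Continuously mirror physical-world measurements into the entity.
        self.state = sensor_data
        self.synced = True

@dataclass
class RobotArmEntity(DigitalEntity):
    """Specialized entity class, reusable across many factory-floor twins."""
    joint_count: int = 6

# Upcasting: a registry manages heterogeneous entities through the base class,
# so complex, repetitively used entities are handled uniformly.
registry: list[DigitalEntity] = [RobotArmEntity("arm-01"), RobotArmEntity("arm-02")]
for entity in registry:
    entity.update({"temperature_c": 41.2})
```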
{"title":"Digital Entity Management Methodology for Digital Twin Implementation: Concept, Definition, and Examples","authors":"Yegi Lee;Myung-Sun Baek;Kyoungro Yoon","doi":"10.1109/TBC.2024.3517138","DOIUrl":"https://doi.org/10.1109/TBC.2024.3517138","url":null,"abstract":"Many efforts to achieve cost savings through simulations have been ongoing in the cyber-physical system (CPS) industry and manufacturing field. Recently, the concept of digital twins has emerged as a promising solution for cost reduction in various fields, such as smart cities, factory optimization, architecture, and manufacturing. Digital twins offer enormous potential by continuously monitoring and updating data to study a wide range of issues and improve products and processes. However, the practical implementation of digital twins presents significant challenges. Additionally, while various studies have introduced the concepts and roles of digital twin systems and digital components, further research is needed to explore efficient operation and management strategies. This paper aims to present digital entity management methodology for the efficient implementation of digital twin systems. Our proposed class-level digital entity management methodology constructs complex and repetitively used digital entities into digital entity classes. This approach facilitates the abstraction, inheritance, and upcasting of digital entity classes. By leveraging class-level management and easily reusable and modifiable digital entities, the implementation of low-complexity digital twin systems becomes feasible. The proposed methodology aims to streamline the digital twin implementation process, addressing complex technical integration and practical implementation challenges.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"19-29"},"PeriodicalIF":3.2,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatial Coupling Strategy and Improved BFGS-Based Advanced Rate Control for VVC
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-12-31 | DOI: 10.1109/TBC.2024.3517167 | Vol. 71, No. 1, pp. 111-124
Jiahao Zhang;Shuhua Xiong;Xiaohai He;Zeming Zhao;Hongdong Qin
This paper presents an advanced rate control (ARC) algorithm for Versatile Video Coding (VVC). The proposed method is based on a spatial coupling strategy and an improved Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm to achieve high-performance rate control (RC). We address the problem that the current coding block does not fully utilize spatial information during the encoding process. First, a parameter updating strategy at the coding tree unit (CTU) level is constructed based on the spatial coupling strategy, which establishes a relationship between video parameters and video texture so that the CTU-level parameters align more closely with the video content. Furthermore, to enhance the precision of RC, we propose an improved BFGS algorithm to update the video parameters, which utilizes the optimal search direction from the different partial derivatives and sets an adaptive speed control factor. Experimental results indicate that the proposed method outperforms the default RC in VVC Test Model (VTM) 19.0, with Bjøntegaard Delta Rate (BD-Rate) savings of 6.35%, 5.09%, and 5.43% under the Low Delay P, Low Delay B, and Random Access configurations, respectively. Moreover, the proposed method demonstrates superior performance compared to other state-of-the-art algorithms.
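For readers unfamiliar with the optimizer, the sketch below shows a BFGS update with an adaptive speed control factor applied to fitting an R-lambda-style rate model; the objective, initial values, and the adaptive rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_error(theta, bpp, lam):
    """Log-domain fitting error of an R-lambda-style model lambda = exp(c) * bpp**b."""
    c, b = theta
    return np.mean((np.log(lam) - (c + b * np.log(bpp))) ** 2)

def grad(theta, bpp, lam, eps=1e-6):
    """Central-difference gradient of the fitting error."""
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        g[i] = (fit_error(theta + d, bpp, lam) - fit_error(theta - d, bpp, lam)) / (2 * eps)
    return g

def bfgs_fit(bpp, lam, theta=np.array([1.16, -1.37]), iters=30):
    H = np.eye(2)                                  # inverse-Hessian approximation
    g = grad(theta, bpp, lam)
    for _ in range(iters):
        p = -H @ g                                 # quasi-Newton search direction
        speed = 1.0 / (1.0 + np.linalg.norm(g))    # adaptive speed control factor (illustrative)
        theta_new = theta + speed * p
        g_new = grad(theta_new, bpp, lam)
        s, y = theta_new - theta, g_new - g
        if s @ y > 1e-12:                          # curvature condition for a valid BFGS update
            rho = 1.0 / (s @ y)
            I = np.eye(2)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        theta, g = theta_new, g_new
    return theta

# Toy usage: recover (c, b) from synthetic (bpp, lambda) samples.
bpp = np.array([0.05, 0.1, 0.2, 0.4])
lam = np.exp(1.0) * bpp ** -1.2
print(bfgs_fit(bpp, lam))
```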
{"title":"Spatial Coupling Strategy and Improved BFGS-Based Advanced Rate Control for VVC","authors":"Jiahao Zhang;Shuhua Xiong;Xiaohai He;Zeming Zhao;Hongdong Qin","doi":"10.1109/TBC.2024.3517167","DOIUrl":"https://doi.org/10.1109/TBC.2024.3517167","url":null,"abstract":"This paper presents an advanced rate control (ARC) algorithm for Versatile Video Coding (VVC). The proposed method is based on spatial coupling strategy and improved Broyden Fletcher Goldfarb Shanno (BFGS) algorithm to achieve a high performance rate control (RC). In this paper, we address the problem that the current coding block does not fully utilise the spatial information during the encoding process. Firstly, a parameter updating strategy at the coding tree unit (CTU) level is constructed based on spatial coupling strategy. The spatial coupling strategy established the relationship between video parameters and video texture, which enables the video parameters at the CTU level to be more closely aligned with the video content. Furthermore, in order to enhance the precision of RC, we have proposed an improved BFGS algorithm to update video parameters, which utilizes the optimal search direction of the different partial differentials and sets an adaptive speed control factor. The experimental results indicate that the proposed method offers better performance compared to the default RC in VVC Test Moder (VTM) 19.0, with Bjøntegaard Delta Rate (BD-Rate) savings of 6.35%, 5.09% and 5.43% under Low Delay P, Low Delay B and Random Access configurations, respectively. Moreover, the proposed method demonstrates superior performance compared to other state-of-the-art algorithms.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"111-124"},"PeriodicalIF":3.2,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generalizable Underwater Image Quality Assessment With Curriculum Learning-Inspired Domain Adaption
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-12-27 | DOI: 10.1109/TBC.2024.3511962 | Vol. 71, No. 1, pp. 252-263
Shihui Wu;Qiuping Jiang;Guanghui Yue;Shiqi Wang;Guangtao Zhai
The complex distortions suffered by real-world underwater images create urgent demand for accurate underwater image quality assessment (UIQA) approaches that can predict underwater image quality consistently with human perception. Deep learning techniques have achieved great success in many applications, but they usually require a substantial amount of human-labeled data, which is time-consuming and labor-intensive to collect. Developing a deep learning-based UIQA method that does not rely on any human-labeled underwater images for model training is therefore a great challenge. In this work, we propose a novel UIQA method based on domain adaption (DA) from a curriculum learning perspective. The proposed method, called curriculum learning-inspired DA (CLIDA), aims to learn a robust and generalizable UIQA model by conducting DA between labeled natural images and unlabeled underwater images progressively, i.e., from easy to hard. The key challenge is selecting easy samples from the underwater images in the target domain so that the difficulty of DA is well-controlled at each stage. To this end, we propose a simple yet effective easy sample selection (ESS) scheme to form an easy sample set at each stage. DA is then performed between the entire natural image set in the source domain (with labels) and the selected easy sample set in the target domain (with pseudo labels). Because only reliable easy examples are involved in DA at each stage, the difficulty of DA is well-controlled and the capability of the model is progressively enhanced. We conduct extensive experiments to verify the superiority of the proposed CLIDA method and the effectiveness of each key component of the CLIDA framework. The source code will be made available at https://github.com/zzeu001/CLIDA.
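A minimal sketch of confidence-based easy sample selection in a curriculum schedule is shown below; the selection rule, schedule, and variable names are illustrative assumptions rather than the paper's exact ESS scheme.

```python
import numpy as np

def select_easy_samples(pseudo_labels, confidences, stage, n_stages):
    """Keep the most confident target-domain samples; admit more each stage."""
    keep = max(1, int((stage + 1) / n_stages * len(pseudo_labels)))
    idx = np.argsort(-confidences)[:keep]  # most confident ("easiest") first
    return idx, pseudo_labels[idx]

# Stage 0 of 3: only the most reliable underwater samples (with pseudo labels
# from the current model) join the adaptation against the labeled natural set;
# later stages progressively admit harder samples.
pseudo = np.array([0.71, 0.32, 0.55, 0.90])
conf = np.array([0.95, 0.40, 0.80, 0.60])
print(select_easy_samples(pseudo, conf, stage=0, n_stages=3))
```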
{"title":"Generalizable Underwater Image Quality Assessment With Curriculum Learning-Inspired Domain Adaption","authors":"Shihui Wu;Qiuping Jiang;Guanghui Yue;Shiqi Wang;Guangtao Zhai","doi":"10.1109/TBC.2024.3511962","DOIUrl":"https://doi.org/10.1109/TBC.2024.3511962","url":null,"abstract":"The complex distortions suffered by real-world underwater images pose urgent demands on accurate underwater image quality assessment (UIQA) approaches that can predict underwater image quality consistently with human perception. Deep learning techniques have achieved great success in many applications, yet usually requiring a substantial amount of human-labeled data, which is time-consuming and labor-intensive. Developing a deep learning-based UIQA method that does not rely on any human labeled underwater images for model training poses a great challenge. In this work, we propose a novel UIQA method based on domain adaption (DA) from a curriculum learning perspective. The proposed method is called curriculum learning-inspired DA (CLIDA), aiming to learn an robust and generalizable UIQA model by conducting DA between the labeled natural images and unlabeled underwater images progressively, i.e., from easy to hard. The key is how to select easy samples from all underwater images in the target domain so that the difficulty of DA can be well-controlled at each stage. To this end, we propose a simple yet effective easy sample selection (ESS) scheme to form an easy sample set at each stage. Then, DA is performed between the entire natural image set in the source domain (with labels) and the selected easy sample set in the target domain (with pseudo labels) at each stage. As only those reliable easy examples are involved in DA at each stage, the difficulty of DA is well-controlled and the capability of the model is expected to be progressively enhanced. We conduct extensive experiments to verify the superiority of the proposed CLIDA method and also the effectiveness of each key component involved in our CLIDA framework. The source code will be made available at <uri>https://github.com/zzeu001/CLIDA</uri>.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"252-263"},"PeriodicalIF":3.2,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CLIPVQA: Video Quality Assessment via CLIP
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-12-27 | DOI: 10.1109/TBC.2024.3511927 | Vol. 71, No. 1, pp. 291-306
Fengchuang Xing;Mingjie Li;Yuan-Gen Wang;Guopu Zhu;Xiaochun Cao
In learning vision-language representations from Web-scale data, the contrastive language-image pre-training (CLIP) mechanism has demonstrated remarkable performance in many vision tasks. However, its application to the widely studied video quality assessment (VQA) task remains an open issue. In this paper, we propose an efficient and effective CLIP-based Transformer method for the VQA problem (CLIPVQA). Specifically, we first design an effective video frame perception paradigm to extract the rich spatiotemporal quality and content information among video frames. The spatiotemporal quality features are then integrated using a self-attention mechanism to yield a video-level quality representation. To utilize the quality language descriptions of videos for supervision, we develop a CLIP-based encoder for language embedding, which is aggregated with the generated content information via a cross-attention module to produce a video-language representation. Finally, the video-level quality and video-language representations are fused for the final video quality prediction, where a vectorized regression loss is employed for efficient end-to-end optimization. Comprehensive experiments are conducted on eight in-the-wild video datasets with diverse resolutions to evaluate the performance of CLIPVQA. The experimental results show that CLIPVQA achieves new state-of-the-art VQA performance and up to 37% better generalizability than existing benchmark VQA methods. A series of ablation studies further validates the effectiveness of each module in CLIPVQA.
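The described pipeline can be sketched in a few lines of PyTorch; the dimensions, pooling choices, and module wiring below are illustrative assumptions, and the CLIP image/text encoders are stubbed as precomputed features rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CLIPVQASketch(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.regressor = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, 1))

    def forward(self, frame_feats, text_feats):
        # frame_feats: (B, T, D) CLIP image features; text_feats: (B, L, D) CLIP text features.
        v, _ = self.temporal_attn(frame_feats, frame_feats, frame_feats)
        video_repr = v.mean(dim=1)              # video-level quality representation
        x, _ = self.cross_attn(text_feats, frame_feats, frame_feats)
        vl_repr = x.mean(dim=1)                 # video-language representation
        # Fuse the two representations and regress a scalar quality score.
        return self.regressor(torch.cat([video_repr, vl_repr], dim=-1)).squeeze(-1)

model = CLIPVQASketch()
score = model(torch.randn(2, 16, 512), torch.randn(2, 8, 512))  # (B,) predicted quality
```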
{"title":"CLIPVQA: Video Quality Assessment via CLIP","authors":"Fengchuang Xing;Mingjie Li;Yuan-Gen Wang;Guopu Zhu;Xiaochun Cao","doi":"10.1109/TBC.2024.3511927","DOIUrl":"https://doi.org/10.1109/TBC.2024.3511927","url":null,"abstract":"In learning vision-language representations from Web-scale data, the contrastive language-image pre-training (CLIP) mechanism has demonstrated a remarkable performance in many vision tasks. However, its application to the widely studied video quality assessment (VQA) task is still an open issue. In this paper, we propose an efficient and effective CLIP-based Transformer method for the VQA problem (CLIPVQA). Specifically, we first design an effective video frame perception paradigm with the goal of extracting the rich spatiotemporal quality and content information among video frames. Then, the spatiotemporal quality features are adequately integrated together using a self-attention mechanism to yield video-level quality representation. To utilize the quality language descriptions of videos for supervision, we develop a CLIP-based encoder for language embedding, which is then fully aggregated with the generated content information via a cross-attention module for producing video-language representation. Finally, the video-level quality and video-language representations are fused together for final video quality prediction, where a vectorized regression loss is employed for efficient end-to-end optimization. Comprehensive experiments are conducted on eight in-the-wild video datasets with diverse resolutions to evaluate the performance of CLIPVQA. The experimental results show that the proposed CLIPVQA achieves new state-of-the-art VQA performance and up to 37% better generalizability than existing benchmark VQA methods. A series of ablation studies are also performed to validate the effectiveness of each module in CLIPVQA.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"291-306"},"PeriodicalIF":3.2,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Distortion Propagation Model-Based V-PCC Rate Control for 3D Point Cloud Broadcasting
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-12-12 | DOI: 10.1109/TBC.2024.3511950 | Vol. 71, No. 1, pp. 180-192
Zhanyuan Cai;Wenxu Gao;Ge Li;Wei Gao
For efficient point cloud broadcasting, point cloud compression technologies serve as the foundation and play a crucial role in immersive media communication and streaming. Video-based point cloud compression (V-PCC) is the standard recently developed by the Moving Picture Experts Group (MPEG) for dynamic point clouds. Its original fixed-ratio bit allocation (FR-BA) method in the unique all-intra (AI) structure leads to a significant rate-distortion performance gap between rate control and the fixed quantization parameters (FixedQP) scheme, as evidenced by significant BD-Rate (Bjøntegaard Delta Rate) increases for both geometry and attribute. To address this issue, we propose a distortion propagation model-based frame-level bit allocation method specifically tailored to the AI structure in V-PCC. First, the distortion propagation model inside the group of pictures (GOP) is analyzed for the AI configuration. Second, the skip ratio of 4x4 minimum coding units (CUs) is utilized to predict the distortion propagation factor. Third, occupancy information is employed to refine the distortion propagation model and further enhance compression performance. Experimental results demonstrate the effectiveness of the proposed method: it achieves BD-Rate reductions of 0.92% and 4.85% in geometry and attribute, respectively, compared to the FR-BA method. Furthermore, with distortion propagation factor prediction incorporating occupancy correction, the BD-Rate reductions are extended to 2.16% and 6.13% in geometry and attribute, respectively.
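The sketch below illustrates frame-level bit allocation weighted by a distortion propagation factor predicted from the skip ratio and refined by occupancy; the linear form and coefficients are illustrative stand-ins for the paper's fitted model.

```python
def propagation_factor(skip_ratio, occupancy_ratio, a=1.0, b=2.0, c=0.5):
    """Illustrative propagation-factor prediction: frames whose 4x4 CUs are
    often skipped by later frames propagate more distortion, and occupancy
    refines the estimate for V-PCC's padded geometry/attribute pictures."""
    return (a + b * skip_ratio) * (1.0 + c * occupancy_ratio)

def allocate_bits(gop_budget, skip_ratios, occupancy_ratios):
    """Split a GOP's bit budget across frames in proportion to their weights."""
    weights = [propagation_factor(s, o) for s, o in zip(skip_ratios, occupancy_ratios)]
    total = sum(weights)
    return [gop_budget * w / total for w in weights]

# Example: three frames sharing a 300k-bit GOP budget.
print(allocate_bits(300_000, [0.8, 0.5, 0.2], [0.9, 0.7, 0.6]))
```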
{"title":"Distortion Propagation Model-Based V-PCC Rate Control for 3D Point Cloud Broadcasting","authors":"Zhanyuan Cai;Wenxu Gao;Ge Li;Wei Gao","doi":"10.1109/TBC.2024.3511950","DOIUrl":"https://doi.org/10.1109/TBC.2024.3511950","url":null,"abstract":"For efficient point cloud broadcasting, point cloud compression technologies serve as the foundation, which plays a crucial role in immersive media communication and streaming. Video-based point cloud compression (V-PCC) is the recently developed standard by the Moving Picture Experts Group (MPEG) for dynamic point clouds. Its original fixed-ratio bit allocation (FR-BA) method in the unique all intra (AI) structure leads to a significant rate-distortion performance gap between the rate control manner and the fixed quantization parameters (FixedQP) scheme, as evidenced by significant increases in BD-Rate (Bjøntegaard Delta Rate) for both geometry and attribute. To address this issue, we propose a distortion propagation model-based frame-level bit allocation method that is specifically tailored for AI structure in V-PCC. First, the analysis is carried out for the distortion propagation model inside the group of pictures (GOP) for the AI configuration. Second, the skip ratio of 4x4 minimum coding units (CUs) is utilized to predict the distortion propagation factor. Third, the occupancy information is employed to refine the distortion propagation model and further enhance compression performance. Finally, experimental results demonstrate the effectiveness of the proposed distortion propagation model-based frame-level bit allocation method. Specifically, experimental results reveal that the proposed method achieves BD-Rate reductions of 0.92% and 4.85% in geometry and attribute, respectively, compared to the FR-BA method. Furthermore, with the introduction of distortion propagation factor prediction incorporating occupancy correction, the BD-Rate reductions are further extended to 2.16% and 6.13% in geometry and attribute, respectively.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"180-192"},"PeriodicalIF":3.2,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rate-Compatible Length-Scalable Quasi-Cyclic Spatially-Coupled LDPC Codes
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-12-12 | DOI: 10.1109/TBC.2024.3511916 | Vol. 71, No. 1, pp. 81-95
Zhitong He;Kewu Peng;Jian Song
The capability of QC-SC-LDPC codes to be employed in broadcasting systems has been studied in previous research. However, implementation-oriented features of QC-SC-LDPC codes, such as rate compatibility and length scalability, have not yet been well studied. In this paper, we first propose a new implementation-oriented structure of QC-SC-LDPC codes for broadcasting systems that supports rate compatibility and length scalability. Then, the three-dimensional (3D) grid-based (G) progressive edge growth and lifting (PEGL) method is proposed to construct QC-SC-LDPC codes with that structure, achieving desirable performance across different code rates and code lengths within the given design complexity. Finally, a family of rate-compatible, length-scalable QC-SC-LDPC codes is constructed via the 3D-G-PEGL method, and simulation results demonstrate the effectiveness of the method. The scaling behaviors of QC-SC-LDPC codes are also observed in the simulation results.
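As background on the quasi-cyclic part of the construction, the sketch below expands an exponent (base) matrix into a binary parity-check matrix by circulant lifting; the base matrix and lifting size are toy values rather than the paper's codes, and length scalability corresponds to reusing the same base matrix with different lifting sizes Z.

```python
import numpy as np

def lift(base, Z):
    """Expand an exponent matrix into a binary QC parity-check matrix: each
    entry becomes a Z x Z cyclically shifted identity, or zeros for -1."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            shift = base[i, j]
            if shift >= 0:  # -1 marks an all-zero block
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(np.eye(Z, dtype=np.uint8), shift, axis=1)
    return H

base = np.array([[0, 1, -1, 2],
                 [2, -1, 0, 1]])
H = lift(base, Z=4)  # reuse the same base matrix with another Z to scale the length
print(H.shape)       # (8, 16)
```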
{"title":"Rate-Compatible Length-Scalable Quasi-Cyclic Spatially-Coupled LDPC Codes","authors":"Zhitong He;Kewu Peng;Jian Song","doi":"10.1109/TBC.2024.3511916","DOIUrl":"https://doi.org/10.1109/TBC.2024.3511916","url":null,"abstract":"The capability of QC-SC-LDPC codes to be employed in broadcasting systems has been studied in previous research. However, the implementation-oriented features such as rate-compatibility and length-scalability for QC-SC-LDPC codes have not been well studied yet. In this paper, we first propose a new implementation-oriented structure of QC-SC-LDPC codes for broadcasting systems, with support for rate-compatibility and length-scalability. Then, the three-dimensional (3D-) grid-based (G-) progressive edge growth and lifting (PEGL) method is proposed to construct QC-SC-LDPC codes with that structure, which can achieve desirable performance across different code rates and code lengths within the given design complexity. Finally, a family of rate-compatible length-scalable QC-SC-LDPC codes are constructed via the 3D-G-PEGL method, and simulation results demonstrate the effectiveness of that method. Furthermore, the scaling behaviors of QC-SC-LDPC codes are observed from the provided simulation results.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"71 1","pages":"81-95"},"PeriodicalIF":3.2,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Transactions on Broadcasting Publication Information
IF 3.2 | Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-12-11 | DOI: 10.1109/TBC.2024.3495315 | Vol. 70, No. 4, pp. C2-C2
{"title":"IEEE Transactions on Broadcasting Publication Information","authors":"","doi":"10.1109/TBC.2024.3495315","DOIUrl":"https://doi.org/10.1109/TBC.2024.3495315","url":null,"abstract":"","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"70 4","pages":"C2-C2"},"PeriodicalIF":3.2,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10791069","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0