
Latest publications in IEEE Transactions on Broadcasting

Transformer-Based Light Field Geometry Learning for No-Reference Light Field Image Quality Assessment
IF 4.5 | CAS Q1 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2024-01-31 | DOI: 10.1109/TBC.2024.3353579
Lili Lin;Siyu Bai;Mengjia Qu;Xuehui Wei;Luyao Wang;Feifan Wu;Biao Liu;Wenhui Zhou;Ercan Engin Kuruoglu
Elevating traditional 2-dimensional (2D) plane display to 4-dimensional (4D) light field display can significantly enhance users’ immersion and realism, because a light field image (LFI) provides various visual cues in terms of multi-view disparity, motion disparity, and selective focus. Therefore, it is crucial to establish a light field image quality assessment (LF-IQA) model that aligns with human visual perception characteristics. However, it has always been a challenge to evaluate the perceptual quality of multiple light field visual cues simultaneously and consistently. To this end, this paper proposes Transformer-based explicit learning of light field geometry for no-reference light field image quality assessment. Specifically, to explicitly learn the light field epipolar geometry, we stack light field sub-aperture images (SAIs) into four SAI stacks according to four specific light field angular directions, and use a sub-grouping strategy to hierarchically learn local and global light field geometric features. Then, a Transformer encoder with a spatial-shift tokenization strategy is applied to learn a structure-aware light field geometric distortion representation, which is used to regress the final quality score. Evaluation experiments are carried out on three commonly used light field image quality assessment datasets: Win5-LID, NBU-LF1.0, and MPI-LFA. Experimental results demonstrate that our model outperforms state-of-the-art methods and exhibits a high correlation with human perception. The source code is publicly available at https://github.com/windyz77/GeoNRLFIQA.
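As a sketch of the SAI-stacking step described above, the following assumes the light field is stored as a 4-D NumPy array indexed (angular row, angular column, height, width) with a square angular grid; the array layout, function name, and choice of the four directions are illustrative, not taken from the paper.

```python
import numpy as np

def build_sai_stacks(lf):
    """Stack sub-aperture images (SAIs) along four angular directions.

    lf: 4-D light field of shape (U, V, H, W) -- angular grid (U, V),
    spatial resolution (H, W). Assumes a square angular grid (U == V).
    """
    U, V, H, W = lf.shape
    c = U // 2  # central angular row/column
    horizontal = lf[c, :, :, :]                          # fix row, sweep columns
    vertical = lf[:, c, :, :]                            # fix column, sweep rows
    diag_main = np.stack([lf[i, i] for i in range(U)])          # 45-degree sweep
    diag_anti = np.stack([lf[i, U - 1 - i] for i in range(U)])  # 135-degree sweep
    return horizontal, vertical, diag_main, diag_anti
```

Each returned stack has shape (angular, H, W), so epipolar structure appears as line patterns along the stacking axis.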
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 597-606.
Citations: 0
High Accuracy Channel Estimation With TxID Sequence in ATSC 3.0 SFN
IF 4.5 | CAS Q1 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2024-01-30 | DOI: 10.1109/TBC.2024.3353577
Zhihong Hunter Hong;Yiyan Wu;Wei Li;Liang Zhang;Zhiwen Zhu;Sung-Ik Park;Namho Hur;Eneko Iradier;Jon Montalban
Inter-tower communications networks (ITCN) and wireless in-band distribution links (IDL) reuse the same broadcast spectrum for establishing communications links between transmitter towers and for wireless backhaul by multiplexing the ITCN/IDL signals with the broadcast signal into a frame for transmission. In a single-frequency network (SFN) environment, where all the transmitter towers broadcast the same preamble and in-band pilots to improve TV coverage and received signal strength, receiving the desired ITCN/IDL signal from a specific transmitter is challenging with conventional channel estimation techniques. In the Advanced Television Systems Committee (ATSC) 3.0 standard, a unique transmitter identification (TxID) sequence, a spread sequence overlaid with the preamble signal, is assigned to each transmitter for the purpose of SFN planning and synchronization. By using the TxID sequence, channel estimation for a specific transmitter becomes feasible. However, the accuracy of existing TxID-based channel estimation is limited due to interference from the preamble signal and the co-channel TxIDs, as well as the non-orthogonality of the TxID sequences. Several high-accuracy channel estimation schemes based on the TxID sequence are proposed in this paper, which enable IDL and ITCN with very high data rate transmission, e.g., 1024-QAM modulation.
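The principle behind TxID-based channel estimation, correlating the received signal against one transmitter's known spreading sequence so that its channel taps stand out, can be sketched as follows. This is a toy baseline under simplified assumptions (real-valued BPSK-like sequence, no preamble or co-channel TxID interference), not one of the paper's proposed high-accuracy schemes.

```python
import numpy as np

def estimate_channel_taps(rx, txid_seq):
    """Estimate channel taps by sliding cross-correlation of the received
    signal with a known pseudo-random (TxID-like) sequence.

    Because the sequence is nearly uncorrelated with shifted copies of
    itself, the normalized correlation at delay d approximates the
    channel tap at delay d (plus pseudo-noise cross-terms).
    """
    n = len(txid_seq)
    return np.correlate(rx, txid_seq, mode="valid") / n

# Toy example: a BPSK-like sequence through a 2-tap channel.
rng = np.random.default_rng(0)
seq = rng.choice([-1.0, 1.0], size=1024)
channel = np.array([1.0, 0.5])      # taps at delays 0 and 1
rx = np.convolve(seq, channel)      # noiseless received signal
est = estimate_channel_taps(rx, seq)
# est[0] and est[1] approximate the taps 1.0 and 0.5.
```

In a real ATSC 3.0 SFN the correlation output also contains the preamble and the other transmitters' TxIDs, which is exactly the interference the paper's schemes work to suppress.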
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 391-400.
Citations: 0
Occupancy-Assisted Attribute Artifact Reduction for Video-Based Point Cloud Compression
IF 4.5 | CAS Q1 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2024-01-30 | DOI: 10.1109/TBC.2024.3353568
Linyao Gao;Zhu Li;Lizhi Hou;Yiling Xu;Jun Sun
Video-based point cloud compression (V-PCC) has achieved remarkable compression efficiency; it converts point clouds into videos and leverages video codecs for coding. Under lossy compression, undesirable artifacts in the attribute images degrade the point cloud attribute reconstruction quality. In this paper, we propose an Occupancy-assisted Compression Artifact Removal Network (OCARNet) to remove the distortions of V-PCC decoded attribute images for high-quality point cloud attribute reconstruction. Specifically, the occupancy information is fed into the network as prior knowledge to provide more spatial and structural information and to assist in eliminating the distortions of the texture regions. To aggregate the occupancy information effectively, we design a multi-level feature fusion framework with Channel-Spatial Attention based Residual Blocks (CSARB), where short and long residual connections are jointly employed to capture local context and long-range dependency. Besides, we propose a Masked Mean Square Error (MMSE) loss function based on the occupancy information to train the proposed network to focus on estimating the attribute artifacts of the occupied regions. To the best of our knowledge, this is the first learning-based attribute artifact removal method for V-PCC. Experimental results demonstrate that our framework outperforms existing state-of-the-art methods in both objective and subjective quality comparisons.
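The Masked Mean Square Error idea, computing the loss only over pixels the occupancy map marks as belonging to projected points, can be sketched in a few lines. The function name and NumPy formulation are illustrative; the paper's loss operates on network tensors during training.

```python
import numpy as np

def masked_mse(pred, target, occupancy):
    """MSE averaged over occupied pixels only.

    occupancy: binary map, 1 where a pixel corresponds to a projected
    point, 0 for padding. Unoccupied pixels carry no reconstructable
    attribute, so they are excluded from the error.
    """
    mask = occupancy.astype(bool)
    diff = (pred - target)[mask]
    return float(np.mean(diff ** 2))
```

Compared with a plain MSE over the whole attribute image, this keeps the padding regions (which are discarded at reconstruction anyway) from diluting the training signal.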
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 667-680.
Citations: 0
Access Optimization in 802.11ax WLAN for Load Balancing and Competition Avoidance of IPTV Traffic
IF 4.5 | CAS Q1 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2024-01-25 | DOI: 10.1109/TBC.2024.3349768
Sujie Shao;Linlin Zhang;Fei Qi
With the improvement of terminal intelligence and the enrichment of digital content, terminal density is showing an explosive growth trend, and the traffic carried by IPTV and other services is rapidly increasing. HDHB WLAN (High-Density High-Bandwidth Wireless LAN) is becoming a dominant form of wireless LAN. However, the RSSI-based access mode has led to a notable load imbalance, and the resource competition mode based on random access intensifies the difficulty of access resource acquisition, which exacerbates the traffic challenges faced by WLAN. IEEE 802.11ax somewhat alleviates traffic pressure, but it does not fundamentally solve these problems. This paper introduces an access optimization mechanism for the 802.11ax HDHB WLAN, which aims to achieve load balancing while considering competition avoidance, alleviating the pressure of IPTV traffic. First, an 802.11ax access optimization architecture for HDHB WLAN is constructed, aimed at alleviating traffic pressure and meeting the quality requirements of IPTV and other services by modifying access processes of terminals. Next, a terminal information acquisition and interactive access control strategy based on the trigger frame is devised to obtain accurate parameter information and facilitate orderly concurrent access control for high-density terminals. Additionally, a load balancing and competition avoidance oriented access control method for HDHB WLAN is proposed, including an access optimization control model, and an access strategy generation algorithm based on the Improved DQN algorithm. Finally, simulation results show that the global throughput and load balancing of HDHB WLAN are improved, consequently reducing overall WLAN traffic pressure.
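To illustrate why the pure strongest-RSSI rule causes the load imbalance the abstract describes, a minimal load-aware association heuristic is sketched below; it is a hypothetical baseline, not the paper's DQN-based access strategy, and the dictionary keys are assumptions.

```python
def pick_ap(aps, rssi_threshold=-75.0):
    """Load-aware AP selection: among APs with acceptable signal
    strength, associate with the least-loaded one.

    aps: list of dicts with hypothetical keys 'rssi' (dBm) and
    'load' (number of associated stations). Returns None if no AP
    clears the RSSI threshold.
    """
    usable = [ap for ap in aps if ap["rssi"] >= rssi_threshold]
    if not usable:
        return None
    return min(usable, key=lambda ap: ap["load"])
```

An RSSI-only rule would pile every station onto the strongest AP; trading a little signal strength for a lighter cell is the basic balance the paper's learned strategy optimizes jointly with competition avoidance.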
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 401-412.
Citations: 0
DMML: Deep Multi-Prior and Multi-Discriminator Learning for Underwater Image Enhancement
IF 4.5 | CAS Q1 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2024-01-25 | DOI: 10.1109/TBC.2024.3349773
Alireza Esmaeilzehi;Yang Ou;M. Omair Ahmad;M. N. S. Swamy
Enhancing the quality of images acquired in underwater environments is crucial in many broadcast technologies. As the richness of the features generated by deep underwater image enhancement networks improves, visual signals of higher quality can be produced. In view of this, in this paper, we propose a new deep network for the task of underwater image enhancement, in which the network feature generation process is guided by prior information obtained from various underwater medium transmission map and atmospheric light estimation methods. Further, in order to obtain high values for the different image quality assessment metrics associated with the images produced by the proposed network, we introduce a multi-stage training process. In the first stage, the proposed network is trained with the conventional supervised learning technique, whereas in the second stage, the training is carried out with an adversarial learning technique. Finally, in the third stage, the training of the network obtained by conventional supervised learning is continued under the guidance of the one trained by adversarial learning. For the adversarial learning stage, we propose a novel multi-discriminator generative adversarial network, which is able to produce images with more realistic textures and structures. The proposed multi-discriminator generative adversarial network performs the discrimination between real and fake data in various underwater environment color spaces. The results of different experiments show the effectiveness of the proposed scheme in restoring high-quality images compared to other state-of-the-art deep underwater image enhancement networks.
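The multi-discriminator idea, judging a generated image in more than one color space and aggregating the adversarial losses, can be sketched as below. The BT.601 RGB-to-YCbCr conversion is standard; the discriminator callables are hypothetical stand-ins for trained networks, and the loss form is a generic non-saturating generator loss rather than the paper's exact objective.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Approximate BT.601 full-range RGB -> YCbCr for channels in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 + (b - y) * 0.564
    cr = 0.5 + (r - y) * 0.713
    return np.stack([y, cb, cr], axis=-1)

def multi_space_generator_loss(fake_rgb, disc_rgb, disc_ycbcr):
    """Aggregate generator losses from discriminators that each judge a
    different color space. Each disc_* is a callable returning a realism
    score in (0, 1]; here they are placeholders for trained networks.
    """
    losses = [
        -np.log(disc_rgb(fake_rgb) + 1e-8),
        -np.log(disc_ycbcr(rgb_to_ycbcr(fake_rgb)) + 1e-8),
    ]
    return float(np.mean(losses))
```

Judging the same image in several color spaces gives the generator complementary gradients on luminance structure and on chromatic distortions, which matters underwater where color casts dominate.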
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 637-653.
Citations: 0
Low-Rate LDPC Code Design for DTMB-A
IF 4.5 | CAS Q1 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2024-01-22 | DOI: 10.1109/TBC.2024.3349790
Zhitong He;Kewu Peng;Chao Zhang;Jian Song
Digital terrestrial television multimedia broadcasting-advanced (DTMB-A), proposed by China, serves as a second-generation digital terrestrial television broadcasting (DTTB) standard with advanced forward error correction coding schemes. Nevertheless, to adapt to low signal-to-noise ratio (SNR) scenarios such as cloud transmission systems, low-rate LDPC codes are required for DTMB-A. In this paper, a new design of low-rate DTMB-A LDPC codes is presented systematically. Specifically, a rate-compatible raptor-like structure for low-rate DTMB-A LDPC codes is presented, which supports multiple low code rates at a constant code length. A new construction method is then proposed for low-rate DTMB-A LDPC codes, in which progressive block extension is employed and the minimum distance is explicitly optimized so that it increases after each block extension. Finally, the performance of the constructed DTMB-A LDPC codes at two low code rates, 1/3 and 1/4, is simulated and compared with ATSC 3.0 LDPC codes, which demonstrates the effectiveness of our design.
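A raptor-like, rate-compatible extension can be illustrated structurally: each extension step appends new parity checks over the existing code bits plus an identity block for the new parity bits, lowering the code rate while the information length stays fixed. The sketch below shows only this matrix structure; the paper's construction additionally optimizes each extension so the minimum distance grows.

```python
import numpy as np

def extend_parity_check(H, E):
    """One raptor-like extension step of a binary parity-check matrix.

    H: (m, n) base parity-check matrix.
    E: (k, n) new parity equations over the existing n code bits.
    Returns the (m + k, n + k) extended matrix
        [[H, 0],
         [E, I]],
    where the identity block makes the k new parity bits directly
    computable from the old bits (systematic extension).
    """
    m, n = H.shape
    k, _ = E.shape
    top = np.hstack([H, np.zeros((m, k), dtype=int)])
    bottom = np.hstack([E, np.eye(k, dtype=int)])
    return np.vstack([top, bottom]) % 2
```

If the base code has rate (n - m) / n, each extension step yields rate (n - m) / (n + k), so repeating the step walks the code down through a family of compatible low rates.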
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 739-746.
Citations: 0
EffiHDR: An Efficient Framework for HDRTV Reconstruction and Enhancement in UHD Systems
IF 4.5 | CAS Q1 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2024-01-10 | DOI: 10.1109/TBC.2023.3345657
Hengsheng Zhang;Xueyi Zou;Guo Lu;Li Chen;Li Song;Wenjun Zhang
Recent advancements in SDRTV-to-HDRTV conversion have yielded impressive results in reconstructing high dynamic range television (HDRTV) videos from standard dynamic range television (SDRTV) videos. However, the practical applications of these techniques are limited for ultra-high definition (UHD) video systems due to their high computational and memory costs. In this paper, we propose EffiHDR, an efficient framework primarily operating in the downsampled space, effectively reducing the computational and memory demands. Our framework comprises a real-time SDRTV-to-HDRTV Reconstruction model and a plug-and-play HDRTV Enhancement model. The SDRTV-to-HDRTV Reconstruction model learns affine transformation coefficients instead of directly predicting output pixels to preserve high-frequency information and mitigate information loss caused by downsampling. It decomposes SDRTV-to-HDR mapping into pixel intensity-dependent and local-dependent affine transformations. The pixel intensity-dependent transformation leverages global contexts and pixel intensity conditions to transform SDRTV pixels to the HDRTV domain. The local-dependent transformation predicts affine coefficients based on local contexts, further enhancing dynamic range, local contrast, and color tone. Additionally, we introduce a plug-and-play HDRTV Enhancement model based on an efficient Transformer-based U-net, which enhances luminance and color details in challenging recovery scenarios. Experimental results demonstrate that our SDRTV-to-HDRTV Reconstruction model achieves real-time 4K conversion with impressive performance. When combined with the HDRTV Enhancement model, our approach outperforms state-of-the-art methods in performance and efficiency.
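The coefficient-map idea, predicting affine coefficients in a downsampled space and applying them at full resolution as hdr = scale * sdr + offset, can be sketched as follows. Nearest-neighbor upsampling via np.kron stands in for whatever learned or bilinear upsampling the model actually uses; the function and variable names are illustrative.

```python
import numpy as np

def apply_affine_map(sdr, scale_lr, offset_lr):
    """Apply low-resolution per-region affine coefficients to a
    full-resolution SDR frame: hdr = scale * sdr + offset.

    sdr:       (H, W) full-resolution luminance plane.
    scale_lr:  (h, w) scale-coefficient map, H % h == 0, W % w == 0.
    offset_lr: (h, w) offset-coefficient map.
    Because only coefficients (not pixels) are predicted at low
    resolution, the high-frequency content of sdr passes through intact.
    """
    fh = sdr.shape[0] // scale_lr.shape[0]
    fw = sdr.shape[1] // scale_lr.shape[1]
    scale = np.kron(scale_lr, np.ones((fh, fw)))    # nearest-neighbor upsample
    offset = np.kron(offset_lr, np.ones((fh, fw)))
    return scale * sdr + offset
```

This is why the affine formulation is cheaper than predicting HDR pixels directly: the network runs on the downsampled frame, and only the tiny coefficient maps are brought back to 4K.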
{"title":"EffiHDR: An Efficient Framework for HDRTV Reconstruction and Enhancement in UHD Systems","authors":"Hengsheng Zhang;Xueyi Zou;Guo Lu;Li Chen;Li Song;Wenjun Zhang","doi":"10.1109/TBC.2023.3345657","DOIUrl":"10.1109/TBC.2023.3345657","url":null,"abstract":"Recent advancements in SDRTV-to-HDRTV conversion have yielded impressive results in reconstructing high dynamic range television (HDRTV) videos from standard dynamic range television (SDRTV) videos. However, the practical applications of these techniques are limited for ultra-high definition (UHD) video systems due to their high computational and memory costs. In this paper, we propose EffiHDR, an efficient framework primarily operating in the downsampled space, effectively reducing the computational and memory demands. Our framework comprises a real-time SDRTV-to-HDRTV Reconstruction model and a plug-and-play HDRTV Enhancement model. The SDRTV-to-HDRTV Reconstruction model learns affine transformation coefficients instead of directly predicting output pixels to preserve high-frequency information and mitigate information loss caused by downsampling. It decomposes SDRTV-to-HDR mapping into pixel intensity-dependent and local-dependent affine transformations. The pixel intensity-dependent transformation leverages global contexts and pixel intensity conditions to transform SDRTV pixels to the HDRTV domain. The local-dependent transformation predicts affine coefficients based on local contexts, further enhancing dynamic range, local contrast, and color tone. Additionally, we introduce a plug-and-play HDRTV Enhancement model based on an efficient Transformer-based U-net, which enhances luminance and color details in challenging recovery scenarios. Experimental results demonstrate that our SDRTV-to-HDRTV Reconstruction model achieves real-time 4K conversion with impressive performance. 
When combined with the HDRTV Enhancement model, our approach outperforms state-of-the-art methods in performance and efficiency.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"70 2","pages":"620-636"},"PeriodicalIF":4.5,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139947566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
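The coefficient-learning idea in the EffiHDR abstract can be illustrated with a toy sketch: affine coefficients (a, b) are predicted at low resolution, upsampled back to full resolution, and applied as hdr = a * sdr + b. This is a hypothetical illustration of the general technique, not EffiHDR's actual architecture; the function name and the nearest-neighbour upsampling are assumptions (a real system would use a learned or bilinear/guided upsampler).

```python
import numpy as np

def apply_affine_map(sdr, coeff_a, coeff_b):
    """Apply per-pixel affine coefficients, predicted at low resolution
    (e.g. by a small network running in the downsampled space), to a
    full-resolution SDR frame: hdr = a * sdr + b.

    sdr:              (H, W, 3) float array in [0, 1]
    coeff_a, coeff_b: (h, w, 3) low-resolution coefficient maps
    """
    H, W, _ = sdr.shape
    h, w, _ = coeff_a.shape
    # Nearest-neighbour upsampling of the coefficient maps to full
    # resolution (illustrative only; see lead-in).
    ys = np.arange(H) * h // H
    xs = np.arange(W) * w // W
    a = coeff_a[ys][:, xs]
    b = coeff_b[ys][:, xs]
    # Affine mapping into the (unbounded above) HDR domain.
    return np.clip(a * sdr + b, 0.0, None)
```

Because only the compact coefficient maps are predicted at low resolution, the expensive network never touches full-resolution pixels, which is what makes the downsampled-space design cheap.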
Citations: 0
Retina-U: A Two-Level Real-Time Analytics Framework for UHD Live Video Streaming
IF 4.5 CAS Tier 1 Computer Science Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-01-10 DOI: 10.1109/TBC.2023.3345646
Wei Zhang;Yunpeng Jing;Yuan Zhang;Tao Lin;Jinyao Yan
UHD live video streaming, with its high resolution, offers a wealth of fine-grained scene detail, presenting opportunities for intricate video analytics. However, current real-time video-analytics solutions struggle with these detailed features, often yielding low accuracy on small objects with fine details. Furthermore, because of the high bitrate and precision of UHD streams, existing real-time inference frameworks typically suffer from a low analyzed frame rate caused by the significant computational cost involved. To meet the accuracy requirement and improve the analyzed frame rate, we introduce Retina-U, a real-time analytics framework for UHD video streaming. Specifically, we first present SECT, an inference method at the DNN-model level that enhances inference accuracy in dynamic UHD streams with an abundance of small objects. SECT uses a slicing-based enhanced inference (SEI) method and Cascade Sparse Queries (CSQ)-based fine-tuning to improve accuracy, and leverages a lightweight tracker to achieve a high analyzed frame rate. At the system level, to further improve inference accuracy and bolster the analyzed frame rate, we propose a deep-reinforcement-learning-based resource management algorithm for real-time joint network adaptation, resource allocation, and server selection. By simultaneously considering network and computational resources, we can maximize the comprehensive analytic performance in a dynamic and complex environment. Experimental results demonstrate the effectiveness of Retina-U, showcasing improvements in accuracy of up to 38.01% and inference speed acceleration of up to 24.33%.
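A minimal sketch of the general idea behind slicing-based inference: cut a UHD frame into overlapping windows so small objects retain enough pixels for the detector, then map per-slice detections back to frame coordinates. The function names, slice size, and overlap ratio are illustrative assumptions, not the SECT implementation.

```python
def slice_windows(frame_w, frame_h, slice_size=640, overlap=0.2):
    """Generate (x0, y0, x1, y1) windows that cover the frame with a
    fixed overlap ratio; each window is fed to the detector at native
    resolution instead of downscaling the whole UHD frame."""
    step = max(1, int(slice_size * (1 - overlap)))
    windows = []
    y0 = 0
    while True:
        y1 = min(y0 + slice_size, frame_h)
        x0 = 0
        while True:
            x1 = min(x0 + slice_size, frame_w)
            windows.append((x0, y0, x1, y1))
            if x1 >= frame_w:
                break
            x0 += step
        if y1 >= frame_h:
            break
        y0 += step
    return windows

def shift_boxes(boxes, window):
    """Translate detector boxes from slice coordinates back into
    frame coordinates (per-slice results are then merged with NMS)."""
    ox, oy, _, _ = window
    return [(x0 + ox, y0 + oy, x1 + ox, y1 + oy) for x0, y0, x1, y1 in boxes]
```

For a 3840x2160 frame, the slices keep a small object at its original pixel footprint, which is why this style of inference raises accuracy on fine details at the cost of more detector invocations.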
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 429-440.
Citations: 0
GCOTSC: Green Coding Techniques for Online Teaching Screen Content Implemented in AVS3
IF 4.5 CAS Tier 1 Computer Science Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-01-10 DOI: 10.1109/TBC.2023.3340042
Liping Zhao;Zhuge Yan;Zehao Wang;Xu Wang;Keli Hu;Huawen Liu;Tao Lin
During and following the global COVID-19 pandemic, the use of screen content coding applications such as large-scale cloud office, online teaching, and teleconferencing has surged. The vast amount of online data generated by these applications, especially online teaching, has become a major source of Internet video traffic. Consequently, there is an urgent need for low-complexity online teaching screen content (OTSC) coding techniques. We propose GCOTSC, a set of energy-efficient, low-complexity green coding techniques tailored to the unique characteristics of OTSC. In the inter-frame prediction mode, input frames are first divided into visually constant frames (VCFs) and non-VCFs using a VCF identifier, and a new VCF mode is proposed to code VCFs efficiently. In the intra-frame prediction mode, a heuristic multi-type least-probable-option skip mode based on static and dynamic historical information is proposed.
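The VCF identifier can be illustrated with a toy frame-difference test: a frame whose content is visually unchanged from its predecessor (common in online teaching, where a slide stays on screen for seconds) can be coded in a cheap skip-style mode. The metric and threshold here are hypothetical illustrations, not the criterion used by GCOTSC.

```python
import numpy as np

def is_visually_constant(prev, curr, thresh=1.0):
    """Classify `curr` as a visually constant frame (VCF) when its
    mean absolute luma difference from `prev` is below a small
    threshold. `thresh` is an illustrative value, not from the paper.

    prev, curr: (H, W) uint8 luma planes of consecutive frames.
    """
    # Promote to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(diff.mean()) < thresh
```

Since typical online-teaching content is static most of the time, even this crude test would route a large fraction of frames into the cheap path, which is where the complexity savings come from.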
Compared with the AVS3 screen content coding algorithm, using typical online teaching screen content under the AVS3 SCC common test conditions, experimental results show that GCOTSC achieves an average 59.06% reduction in encoding complexity in the low-delay configuration, with almost no impact on coding efficiency.
IEEE Transactions on Broadcasting, vol. 70, no. 1, pp. 174-182.
Citations: 0
Fast Decoding of Polar Codes for Digital Broadcasting Services in 5G
IF 4.5 CAS Tier 1 Computer Science Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-01-05 DOI: 10.1109/TBC.2023.3345642
He Sun;Emanuele Viterbo;Bin Dai;Rongke Liu
The rapid evolution of mobile communication technology provides a great avenue for efficient information transmission to facilitate digital multimedia services. In current 5G systems, broadcasting technology is used to improve the efficiency of information transmission, and polar codes are adopted to improve data transmission reliability. Reducing the decoding latency of polar codes is of great importance for ultra-low-latency and reliable data transmission in 5G broadcasting, which remains a challenge for digital broadcasting services. In this paper, we propose an aggregation method that constructs constituent codes to reduce the decoding latency of polar codes. The aggregation method jointly exploits the structure and reliability of constituent codes to increase the lengths of constituent codes that can be decoded in parallel, thus significantly reducing the decoding latency. Furthermore, an efficient parallel decoding algorithm is integrated with the proposed aggregation method to efficiently decode the reliable constituent codes without sacrificing error-correction performance.
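The notion of constituent codes that can be decoded in one shot rather than bit by bit can be illustrated with the standard fast-SSC node classification by frozen-bit pattern; the paper's aggregation method goes further by merging such nodes, but this sketch shows the building blocks. The rules below follow the fast-SSC literature, not this paper's construction.

```python
def classify_node(frozen):
    """Classify a constituent code of a polar code by its frozen-bit
    pattern, as in fast simplified successive-cancellation decoding.
    Rate-0, rate-1, REP, and SPC nodes admit closed-form parallel
    decoders, so a longer run of them means lower decoding latency.

    frozen: list of booleans, True at frozen bit positions.
    """
    if all(frozen):
        return "rate-0"      # all frozen: codeword is all zeros
    if not any(frozen):
        return "rate-1"      # no frozen bits: per-bit hard decision
    if all(frozen[:-1]) and not frozen[-1]:
        return "REP"         # repetition code: a single info bit
    if frozen[0] and not any(frozen[1:]):
        return "SPC"         # single parity-check code
    return "generic"         # falls back to sequential decoding
```

A decoder walks the code tree, stops recursing wherever one of these special patterns appears, and decodes that whole subtree in parallel; aggregating adjacent special nodes into longer decodable constituent codes is what further cuts latency.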
Simulation results show that the proposed method significantly reduces the decoding latency compared to existing state-of-the-art schemes.
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 731-738.
Citations: 0
Journal: IEEE Transactions on Broadcasting