
IEEE Transactions on Broadcasting: Latest Publications

Low-Rate LDPC Code Design for DTMB-A
IF 4.5 | CAS Q1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-22 | DOI: 10.1109/TBC.2024.3349790
Zhitong He;Kewu Peng;Chao Zhang;Jian Song
Digital terrestrial television multimedia broadcasting-advanced (DTMB-A), proposed by China, serves as a second-generation digital terrestrial television broadcasting (DTTB) standard with advanced forward error correction coding schemes. However, to adapt to low signal-to-noise ratio (SNR) scenarios such as cloud transmission systems, DTMB-A requires LDPC codes with low rates. In this paper, a new design of low-rate DTMB-A LDPC codes is presented systematically. Specifically, a rate-compatible Raptor-like structure for low-rate DTMB-A LDPC codes is presented, which supports multiple low code rates at a constant code length. A new construction method is then proposed for low-rate DTMB-A LDPC codes, in which progressive block extension is employed and the minimum distance is the primary optimization target, such that the minimum distance increases after each block extension. Finally, the performance of the constructed DTMB-A LDPC codes at two low code rates, 1/3 and 1/4, is simulated and compared with ATSC 3.0 LDPC codes, demonstrating the effectiveness of our design.
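The rate-compatible block extension described in the abstract can be sketched in a few lines. The construction below is a generic raptor-like extension for illustration only, not the actual DTMB-A matrices: the random mixing block, the identity on the extension, and all sizes are assumptions.

```python
import numpy as np

def extend_parity_check(H, ext):
    """Raptor-like block extension (illustrative sketch): append `ext` new
    parity checks and `ext` new variable nodes to a base parity-check
    matrix H. The information length k = n - m is unchanged, so the rate
    drops while the base code stays nested inside the extended code."""
    m, n = H.shape
    rng = np.random.default_rng(0)
    # Zero block on top: existing checks do not touch the new variables.
    top = np.hstack([H, np.zeros((m, ext), dtype=int)])
    # New checks mix old variables with an identity on the new ones,
    # which keeps encoding of the extension parity bits straightforward.
    bottom = np.hstack([rng.integers(0, 2, (ext, n)), np.eye(ext, dtype=int)])
    return np.vstack([top, bottom])

def code_rate(H):
    m, n = H.shape
    return (n - m) / n
```

Extending repeatedly keeps k = n − m fixed, so each extension lowers the rate (e.g. 2/3 → 1/2 → 1/3) at the cost of a longer codeword, which is the rate-compatibility property the abstract refers to (there with constant code length rather than constant k).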
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 739–746.
Citations: 0
EffiHDR: An Efficient Framework for HDRTV Reconstruction and Enhancement in UHD Systems
IF 4.5 | CAS Q1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-10 | DOI: 10.1109/TBC.2023.3345657
Hengsheng Zhang;Xueyi Zou;Guo Lu;Li Chen;Li Song;Wenjun Zhang
Recent advancements in SDRTV-to-HDRTV conversion have yielded impressive results in reconstructing high dynamic range television (HDRTV) videos from standard dynamic range television (SDRTV) videos. However, the practical applications of these techniques are limited for ultra-high definition (UHD) video systems due to their high computational and memory costs. In this paper, we propose EffiHDR, an efficient framework primarily operating in the downsampled space, effectively reducing the computational and memory demands. Our framework comprises a real-time SDRTV-to-HDRTV Reconstruction model and a plug-and-play HDRTV Enhancement model. The SDRTV-to-HDRTV Reconstruction model learns affine transformation coefficients instead of directly predicting output pixels to preserve high-frequency information and mitigate information loss caused by downsampling. It decomposes SDRTV-to-HDR mapping into pixel intensity-dependent and local-dependent affine transformations. The pixel intensity-dependent transformation leverages global contexts and pixel intensity conditions to transform SDRTV pixels to the HDRTV domain. The local-dependent transformation predicts affine coefficients based on local contexts, further enhancing dynamic range, local contrast, and color tone. Additionally, we introduce a plug-and-play HDRTV Enhancement model based on an efficient Transformer-based U-net, which enhances luminance and color details in challenging recovery scenarios. Experimental results demonstrate that our SDRTV-to-HDRTV Reconstruction model achieves real-time 4K conversion with impressive performance. When combined with the HDRTV Enhancement model, our approach outperforms state-of-the-art methods in performance and efficiency.
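The core idea of the Reconstruction model, predicting affine coefficients hdr = a · sdr + b in the downsampled space instead of output pixels, can be sketched as follows. The nearest-neighbour coefficient upsampling and the function name are illustrative stand-ins; the actual model predicts the coefficient fields with a network.

```python
import numpy as np

def apply_downsampled_affine(sdr, a_low, b_low, scale):
    """Sketch: take affine coefficients (a, b) given at low resolution,
    upsample them by `scale`, and apply hdr = a * sdr + b at full
    resolution. High-frequency detail survives because `sdr` itself is
    never downsampled; only the smooth coefficient fields are."""
    a = np.kron(a_low, np.ones((scale, scale)))
    b = np.kron(b_low, np.ones((scale, scale)))
    return a * sdr + b
```

A learned model would replace the nearest-neighbour upsampling with something smoother (e.g. bilinear), but the mechanism, cheap prediction at low resolution plus a lossless per-pixel transform at full resolution, is the same.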
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 620–636.
Citations: 0
Retina-U: A Two-Level Real-Time Analytics Framework for UHD Live Video Streaming
IF 4.5 | CAS Q1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-10 | DOI: 10.1109/TBC.2023.3345646
Wei Zhang;Yunpeng Jing;Yuan Zhang;Tao Lin;Jinyao Yan
UHD live video streaming, with its high video resolution, offers a wealth of fine-grained scene details, presenting opportunities for intricate video analytics. However, current real-time video streaming analytics solutions are inadequate for analyzing these detailed features, often leading to low accuracy on small objects with fine details. Furthermore, due to the high bitrate and precision of UHD streaming, existing real-time inference frameworks typically suffer from a low analyzed frame rate caused by the significant computational cost involved. To meet the accuracy requirement and improve the analyzed frame rate, we introduce Retina-U, a real-time analytics framework for UHD video streaming. Specifically, at the model level we first present SECT, a real-time DNN inference model that enhances inference accuracy in dynamic UHD streams containing an abundance of small objects. SECT uses a slicing-based enhanced inference (SEI) method and Cascade Sparse Queries (CSQ)-based fine-tuning to improve accuracy, and leverages a lightweight tracker to achieve a high analyzed frame rate. At the system level, to further improve the inference accuracy and bolster the analyzed frame rate, we propose a deep reinforcement learning-based resource management algorithm for real-time joint network adaptation, resource allocation, and server selection. By simultaneously considering network and computational resources, we can maximize the comprehensive analytic performance in a dynamic and complex environment. Experimental results demonstrate the effectiveness of Retina-U, showcasing accuracy improvements of up to 38.01% and inference speed acceleration of up to 24.33%.
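A minimal sketch of slicing-based inference over a UHD frame: tile the frame with overlap so small objects near tile borders appear whole in at least one tile, then run the detector per tile at native resolution. The tile and overlap sizes below are hypothetical defaults, not the paper's settings.

```python
def slice_frame(width, height, tile=640, overlap=64):
    """Return (x1, y1, x2, y2) tile boxes covering a frame with a fixed
    overlap. Each box is clamped to the frame, and consecutive boxes
    share `overlap` pixels so border objects are seen intact."""
    step = tile - overlap
    boxes = []
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            boxes.append((x, y, min(x + tile, width), min(y + tile, height)))
    return boxes
```

Per-tile detections would then be mapped back to frame coordinates and merged (e.g. by non-maximum suppression); that merge step is omitted here.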
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 429–440.
Citations: 0
GCOTSC: Green Coding Techniques for Online Teaching Screen Content Implemented in AVS3
IF 4.5 | CAS Q1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-10 | DOI: 10.1109/TBC.2023.3340042
Liping Zhao;Zhuge Yan;Zehao Wang;Xu Wang;Keli Hu;Huawen Liu;Tao Lin
During and following the global COVID-19 pandemic, the use of screen content coding applications such as large-scale cloud office, online teaching, and teleconferencing has surged. The vast amount of online data generated by these applications, especially online teaching, has become a major source of Internet video traffic. Consequently, there is an urgent need for low-complexity online teaching screen content (OTSC) coding techniques. Energy-efficient, low-complexity green coding techniques for OTSC, named GCOTSC, are proposed based on the unique characteristics of OTSC. In the inter-frame prediction mode, the input frames are first divided into visually constant frames (VCFs) and non-VCFs using a VCF identifier. A new VCF mode is proposed to code VCFs efficiently. In the intra-frame prediction mode, a heuristic multi-type least-probable-option skip mode based on static and dynamic historical information is proposed. Compared with the AVS3 screen content coding algorithm, using typical online teaching screen content and the AVS3 SCC common test condition, experimental results show that GCOTSC achieves an average 59.06% reduction in encoding complexity in the low-delay configuration, with almost no impact on coding efficiency.
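The VCF identifier can be approximated by a simple frame-difference test, since online-teaching screens are static most of the time. The metric (mean absolute difference) and the threshold below are assumptions for illustration, not the identifier actually used in GCOTSC.

```python
import numpy as np

def is_visually_constant(prev_frame, curr_frame, mad_thresh=0.0):
    """VCF identifier sketch: flag a frame as visually constant when its
    mean absolute difference from the previous frame is at or below a
    (hypothetical) threshold, so the encoder can handle it with a cheap
    skip-style mode instead of a full inter-frame search."""
    mad = np.mean(np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32)))
    return bool(mad <= mad_thresh)
```

With `mad_thresh=0.0` only pixel-identical frames qualify; a real identifier would tolerate small capture noise with a nonzero threshold.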
IEEE Transactions on Broadcasting, vol. 70, no. 1, pp. 174–182.
Citations: 0
Fast Decoding of Polar Codes for Digital Broadcasting Services in 5G
IF 4.5 | CAS Q1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-05 | DOI: 10.1109/TBC.2023.3345642
He Sun;Emanuele Viterbo;Bin Dai;Rongke Liu
The rapid evolution of mobile communication technology provides a great avenue for efficient information transmission to support digital multimedia services. In current 5G systems, broadcasting technology is used to improve the efficiency of information transmission, and polar codes are adopted to improve data transmission reliability. Reducing the decoding latency of polar codes is of great importance for ultra-low-latency, reliable data transmission in 5G broadcasting, and it remains a challenge in digital broadcasting services. In this paper, we propose an aggregation method that constructs constituent codes to reduce the decoding latency of polar codes. The aggregation method jointly exploits the structure and reliability of constituent codes to increase the lengths of the constituent codes that can be decoded in parallel, thus significantly reducing the decoding latency. Furthermore, an efficient parallel decoding algorithm is integrated with the proposed aggregation method to efficiently decode the reliable constituent codes without sacrificing error-correction performance. Simulation results show that the proposed method significantly reduces the decoding latency compared to existing state-of-the-art schemes.
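Fast polar decoding rests on recognizing special constituent codes from the frozen-bit pattern: such nodes can be decoded in one parallel shot instead of bit by bit. Below is a minimal classifier for the four classic node types of simplified successive-cancellation decoding; the paper's aggregation method grows longer parallel-decodable nodes on top of such patterns, which this sketch does not attempt.

```python
def node_type(frozen):
    """Classify a constituent code by its frozen-bit pattern
    (True = frozen). The four special cases each admit a one-shot
    parallel decoder; anything else falls back to recursive decoding."""
    if all(frozen):
        return "rate-0"               # all frozen: codeword is all zeros
    if not any(frozen):
        return "rate-1"               # no frozen bits: hard-decide all LLRs
    if all(frozen[:-1]) and not frozen[-1]:
        return "repetition"           # single info bit, repeated
    if frozen[0] and not any(frozen[1:]):
        return "single-parity-check"  # one parity constraint over the node
    return "general"
```

A decoder walks the code tree, and whenever a subtree matches one of these patterns it decodes the whole subtree at once, which is where the latency savings come from.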
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 731–738.
Citations: 0
Quality-of-Experience Evaluation for Digital Twins in 6G Network Environments
IF 3.2 | CAS Q1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-05 | DOI: 10.1109/TBC.2023.3345656
Zicheng Zhang;Yingjie Zhou;Long Teng;Wei Sun;Chunyi Li;Xiongkuo Min;Xiao-Ping Zhang;Guangtao Zhai
As wireless technology continues its rapid evolution, sixth-generation (6G) networks are capable of offering exceptionally high data transmission rates as well as low latency, which promises to meet the demanding requirements of digital twins (DTs). Quality-of-experience (QoE) in this context, which refers to users' overall satisfaction with and perception of the DT service provided over 6G networks, is significant for optimizing the service and improving the user experience. Despite progress in developing theories and systems for digital twin transmission over 6G networks, the assessment of users' QoE lags behind. To address this gap, our paper introduces the first QoE evaluation database for human digital twins (HDTs) in 6G network environments, aiming to systematically analyze and quantify the related quality factors. We utilize a mmWave network model for channel capacity simulation and employ high-quality digital humans as source models, which are further animated, encoded, and distorted for final QoE evaluation. Subjective quality ratings for the 400 generated HDT sequences are collected in a well-controlled subjective experiment. Additionally, we propose a novel QoE evaluation metric that considers both quality-of-service (QoS) and content-quality features. Experimental results indicate that our model outperforms existing state-of-the-art QoE evaluation models and other competitive quality assessment models, making a significant contribution to the domain of 6G network applications for HDTs.
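At its core, the channel-capacity side of such a simulation reduces to the Shannon formula; a sketch under the assumption of a flat AWGN link (the paper's mmWave network model adds fading, blockage, and beamforming effects not captured here).

```python
import math

def channel_capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR) in bits per second,
    with the SNR given in dB. This is the textbook upper bound a
    link-level simulator compares achieved rates against."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))
```

For example, a 400 MHz mmWave channel at 20 dB SNR yields roughly 2.66 Gbit/s, which illustrates why 6G-scale bandwidths are attractive for streaming digital-twin content.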
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 995–1007.
Citations: 0
Omnidirectional Video Quality Assessment With Causal Intervention
IF 4.5 | CAS Q1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-03 | DOI: 10.1109/TBC.2023.3342707
Zongyao Hu;Lixiong Liu;Qingbing Sang
Spherical signals of omnidirectional videos need to be projected onto a 2D plane for transmission or storage. The projection produces geometrical deformation that affects the feature representation of convolutional neural networks (CNNs) in the perception of omnidirectional videos. Currently developed omnidirectional video quality assessment (OVQA) methods leverage viewport images or spherical CNNs to circumvent the geometrical deformation. However, the viewport-based methods neglect the interaction between viewport images, while spherical CNNs lack sufficient pre-training samples to serve as an efficient backbone for an OVQA model. In this paper, we alleviate the influence of geometrical deformation from a causal perspective. A structural causal model is adopted to analyze the implicit reason why geometrical deformation disturbs the quality representation, and we find that the latitude factor confounds the feature representation and the distorted contents. Based on this evidence, we propose a Causal Intervention-based Quality prediction Network (CIQNet) to alleviate the causal effect of the confounder. The resulting framework first segments the video content into sub-areas and trains feature encoders to obtain a latitude-invariant representation, removing the relationship between latitude and feature representation. The features of each sub-area are then aggregated with estimated weights in a backdoor adjustment module to remove the relationship between latitude and video content. Finally, the temporal dependencies of the aggregated features are modeled to implement the quality prediction. We evaluate the performance of CIQNet on three publicly available OVQA databases. The experimental results show that CIQNet achieves competitive performance against state-of-the-art methods. The source code of CIQNet is available at: https://github.com/Aca4peop/CIQNet.
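The backdoor adjustment step amounts to re-weighting sub-area features under a fixed prior over the confounder (latitude) rather than the observed conditional, i.e. computing a sum of P(z)·f(x, z) over latitude strata z. A schematic version with assumed inputs (in CIQNet the weights are estimated by the network, not supplied by hand):

```python
import numpy as np

def backdoor_aggregate(subarea_features, prior_weights):
    """Backdoor-adjustment-style aggregation (illustrative): combine
    per-sub-area feature vectors under normalized prior weights over
    the confounder, so no single latitude band dominates the pooled
    quality representation."""
    w = np.asarray(prior_weights, dtype=float)
    w = w / w.sum()                      # normalize to a distribution P(z)
    feats = np.asarray(subarea_features, dtype=float)
    return (feats * w[:, None]).sum(axis=0)
```

With uniform weights this is a plain average; non-uniform weights shift how much each latitude stratum contributes to the final quality feature.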
IEEE Transactions on Broadcasting, vol. 70, no. 1, pp. 238–250.
Citations: 0
Blind Image Quality Assessment With Coarse-Grained Perception Construction and Fine-Grained Interaction Learning
IF 4.5 Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-12-28 DOI: 10.1109/TBC.2023.3342696
Bo Hu;Tuoxun Zhao;Jia Zheng;Yan Zhang;Leida Li;Weisheng Li;Xinbo Gao
Image Quality Assessment (IQA) plays an important role in the field of computer vision. However, most existing metrics for Blind IQA (BIQA) adopt an end-to-end approach and do not adequately simulate the process of human subjective evaluation, which limits further improvements in model performance. In the process of perception, people first form a preliminary impression of the distortion type and relative quality of an image, and then give a specific quality score under the influence of the interaction of the two. Although some methods have attempted to explore the effects of distortion type and relative quality, the relationship between them has been neglected. In this paper, we propose a BIQA model with coarse-grained perception construction and fine-grained interaction learning, called PINet for short. The fundamental idea is to learn from the two-stage human perceptual process. Specifically, in the pre-training stage, the backbone initially processes a pair of synthetic distorted images with pseudo-subjective scores, and the multi-scale feature extraction module integrates the deep information and delivers it to the coarse-grained perception construction module, which performs distortion discrimination and quality ranking. In the fine-tuning stage, we propose a fine-grained interactive learning module that lets the two pieces of information interact to further improve the performance of the proposed PINet. The experimental results show that the proposed PINet not only achieves competitive performance on synthetic distortion datasets but also performs better on authentic distortion datasets.
{"title":"Blind Image Quality Assessment With Coarse-Grained Perception Construction and Fine-Grained Interaction Learning","authors":"Bo Hu;Tuoxun Zhao;Jia Zheng;Yan Zhang;Leida Li;Weisheng Li;Xinbo Gao","doi":"10.1109/TBC.2023.3342696","DOIUrl":"https://doi.org/10.1109/TBC.2023.3342696","url":null,"abstract":"Image Quality Assessment (IQA) plays an important role in the field of computer vision. However, most of the existing metrics for Blind IQA (BIQA) adopt an end-to-end way and do not adequately simulate the process of human subjective evaluation, which limits further improvements in model performance. In the process of perception, people first give a preliminary impression of the distortion type and relative quality of the images, and then give a specific quality score under the influence of the interaction of the two. Although some methods have attempted to explore the effects of distortion type and relative quality, the relationship between them has been neglected. In this paper, we propose a BIQA with coarse-grained perception construction and fine-grained interaction learning, called PINet for short. The fundamental idea is to learn from the two-stage human perceptual process. Specifically, in the pre-training stage, the backbone initially processes a pair of synthetic distorted images with pseudo-subjective scores, and the multi-scale feature extraction module integrates the deep information and delivers it to the coarse-grained perception construction module, which performs the distortion discrimination and the quality ranking. In the fine-tuning stage, we propose a fine-grained interactive learning module to interact with the two pieces of information to further improve the performance of the proposed PINet. 
The experimental results prove that the proposed PINet not only achieves competing performances on synthetic distortion datasets but also performs better on authentic distortion datasets.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"70 2","pages":"533-544"},"PeriodicalIF":4.5,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141292391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
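The pre-training stage described above learns a quality ranking from pairs of synthetic distortions with pseudo-subjective scores. The abstract does not name the loss; a standard choice for learning a ranking from scored pairs is a pairwise margin (hinge) loss, and the sketch below is that generic loss, not PINet's published objective.

```python
def margin_ranking_loss(score_a, score_b, label, margin=1.0):
    # label = +1 if image A should outrank image B, -1 otherwise.
    # Zero loss once the predicted gap exceeds the margin in the
    # correct direction; otherwise the loss grows linearly.
    return max(0.0, -label * (score_a - score_b) + margin)

# Correctly ordered pair with a wide predicted gap: no loss.
assert margin_ranking_loss(3.0, 0.5, +1) == 0.0
# Mis-ordered pair: positive loss that training would push down.
loss = margin_ranking_loss(0.2, 1.0, +1)
```

A ranking loss like this only constrains relative order, which is why a later fine-tuning stage on real subjective scores is still needed to calibrate absolute quality values.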
A Blind Video Quality Assessment Method via Spatiotemporal Pyramid Attention
IF 4.5 Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-12-28 DOI: 10.1109/TBC.2023.3340031
Wenhao Shen;Mingliang Zhou;Xuekai Wei;Heqiang Wang;Bin Fang;Cheng Ji;Xu Zhuang;Jason Wang;Jun Luo;Huayan Pu;Xiaoxu Huang;Shilong Wang;Huajun Cao;Yong Feng;Tao Xiang;Zhaowei Shang
As social media communication develops, reliable multimedia quality evaluation indicators have become a prerequisite for enriching user-experience services. In this paper, we propose a multiscale spatiotemporal pyramid attention (SPA) block for constructing a blind video quality assessment (VQA) method to evaluate the perceptual quality of videos. First, we extract motion information from the video frames at different temporal scales to form a feature pyramid, which provides a feature representation with multiple visual perceptions. Second, we propose an SPA module that effectively extracts multiscale spatiotemporal information at various temporal scales and develops a cross-scale dependency relationship. Finally, the quality estimation process is completed by passing the features extracted from a network of multiple stacked spatiotemporal pyramid blocks through a regression network to determine the perceived quality. The experimental results demonstrate that our method is on par with state-of-the-art approaches. The source code is available online at https://github.com/Land5cape/SPBVQA.
{"title":"A Blind Video Quality Assessment Method via Spatiotemporal Pyramid Attention","authors":"Wenhao Shen;Mingliang Zhou;Xuekai Wei;Heqiang Wang;Bin Fang;Cheng Ji;Xu Zhuang;Jason Wang;Jun Luo;Huayan Pu;Xiaoxu Huang;Shilong Wang;Huajun Cao;Yong Feng;Tao Xiang;Zhaowei Shang","doi":"10.1109/TBC.2023.3340031","DOIUrl":"https://doi.org/10.1109/TBC.2023.3340031","url":null,"abstract":"As social media communication develops, reliable multimedia quality evaluation indicators have become a prerequisite for enriching user experience services. In this paper, we propose a multiscale spatiotemporal pyramid attention (SPA) block for constructing a blind video quality assessment (VQA) method to evaluate the perceptual quality of videos. First, we extract motion information from the video frames at different temporal scales to form a feature pyramid, which provides a feature representation with multiple visual perceptions. Second, an SPA module, which can effectively extract multiscale spatiotemporal information at various temporal scales and develop a cross-scale dependency relationship, is proposed. Finally, the quality estimation process is completed by passing the extracted features obtained from a network of multiple stacked spatiotemporal pyramid blocks through a regression network to determine the perceived quality. The experimental results demonstrate that our method is on par with the state-of-the-art approaches. 
The source code necessary for conducting groundbreaking scientific research is accessible online \u0000<uri>https://github.com/Land5cape/SPBVQA</uri>\u0000.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"70 1","pages":"251-264"},"PeriodicalIF":4.5,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140052981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
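The temporal feature pyramid described above, motion information extracted at several temporal scales, can be reduced to a minimal sketch: differencing frames that are 1, 2, 4, ... steps apart and averaging the magnitudes. A real SPA block operates on CNN feature maps; here each frame is a single scalar feature purely for illustration, and all names are hypothetical.

```python
def temporal_pyramid(frames, scales=(1, 2, 4)):
    # frames: one scalar feature per frame (a stand-in for a feature map).
    # For each temporal scale s, average the absolute difference between
    # frames s steps apart -- a crude measure of motion energy at that scale.
    pyramid = {}
    for s in scales:
        diffs = [abs(frames[i + s] - frames[i]) for i in range(len(frames) - s)]
        pyramid[s] = sum(diffs) / len(diffs)
    return pyramid

# A frame-rate flicker shows up only at the finest temporal scale:
frames = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
pyr = temporal_pyramid(frames)  # -> {1: 1.0, 2: 0.0, 4: 0.0}
```

The flicker example shows why multiple scales matter: a distortion that alternates every frame is invisible at even strides, so a single temporal scale would miss part of the motion structure that the pyramid captures.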
Data-Driven Co-Channel Signal Interference Elimination Algorithm for Terrestrial-Satellite Communications and Broadcasting
IF 3.2 Tier 1 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2023-12-21 DOI: 10.1109/TBC.2023.3340022
Ronghui Zhang;Quan Zhou;Xuesong Qiu;Lijian Xin
As satellite and communication technology advances, terrestrial-satellite communications and broadcasting (TSCB) provide uninterrupted services, meeting the demand for seamless communication and broadcasting interconnection. The evolving TSCB technology faces challenges in handling dynamic time-frequency features of wireless signals. Stable satellite-ground interaction is crucial, as co-channel interference can disrupt communication, causing instability. To address this, the TSCB system needs an effective mechanism to eliminate signal interference. Current methods often overlook complex domain features, resulting in suboptimal outcomes. Leveraging deep learning’s computational power, we introduce WSIE-Net, an encoder-decoder model for TSCB signal interference elimination. The model learns an effective separation matrix for robust separation amidst wireless signal interference, comprehensively capturing orthogonal features. We analyze time-frequency diagrams, bit error rates, and other parameters. Performance assessment involves similarity coefficients and Kullback-Leibler Divergence, comparing the proposed algorithm with common blind separation methods. Results indicate significant progress in signal interference elimination for TSCB.
{"title":"Data-Driven Co-Channel Signal Interference Elimination Algorithm for Terrestrial-Satellite Communications and Broadcasting","authors":"Ronghui Zhang;Quan Zhou;Xuesong Qiu;Lijian Xin","doi":"10.1109/TBC.2023.3340022","DOIUrl":"10.1109/TBC.2023.3340022","url":null,"abstract":"As satellite and communication technology advances, terrestrial-satellite communications and broadcasting (TSCB) provide uninterrupted services, meeting the demand for seamless communication and broadcasting interconnection. The evolving TSCB technology faces challenges in handling dynamic time-frequency features of wireless signals. Stable satellite-ground interaction is crucial, as co-channel interference can disrupt communication, causing instability. To address this, the TSCB system needs an effective mechanism to eliminate signal interference. Current methods often overlook complex domain features, resulting in suboptimal outcomes. Leveraging deep learning’s computational power, we introduce WSIE-Net, an encoder-decoder model for TSCB signal interference elimination. The model learns an effective separation matrix for robust separation amidst wireless signal interference, comprehensively capturing orthogonal features. We analyze time-frequency diagrams, bit error rates, and other parameters. Performance assessment involves similarity coefficients and Kullback-Leibler Divergence, comparing the proposed algorithm with common blind separation methods. 
Results indicate significant progress in signal interference elimination for TSCB.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"70 3","pages":"1065-1075"},"PeriodicalIF":3.2,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142207657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
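The similarity coefficient used above to assess separation quality is, in its common form, the magnitude of the Pearson correlation between a recovered signal and a source: a value near 1 means the source was recovered up to sign and scale. A minimal sketch of the evaluation metric only (not of the WSIE-Net separation itself):

```python
def similarity_coefficient(x, y):
    # |Pearson correlation| between a recovered signal and a source signal.
    # Invariant to the sign and scale ambiguities inherent in blind
    # source separation, so a value of 1.0 indicates perfect recovery.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return abs(num / den)

# A sign-flipped, rescaled, shifted copy of the source still scores 1.0:
src = [0.0, 1.0, 2.0, 3.0]
rec = [1.0, -1.0, -3.0, -5.0]  # rec = 1 - 2 * src
sc = similarity_coefficient(src, rec)  # -> 1.0
```

Because any blind separation can only recover sources up to permutation, sign, and scale, an invariant score like this (rather than a raw mean-squared error) is the appropriate way to compare a separated output against the true source.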