
IEEE Transactions on Broadcasting: Latest Publications

Low-Latency VR Video Processing-Transmitting System Based on Edge Computing
IF 3.2 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-04-11 · DOI: 10.1109/TBC.2024.3380455
Nianzhen Gao;Jiaxi Zhou;Guoan Wan;Xinhai Hua;Ting Bi;Tao Jiang
The widespread use of live streaming imposes low-latency requirements on the processing and transmission of virtual reality (VR) videos. This paper introduces a prototype system for low-latency VR video processing and transmission that exploits edge computing to harness the computational power of edge servers. This approach enables efficient video preprocessing and facilitates closer-to-user multicast video distribution. Despite edge computing’s potential, managing large-scale access, addressing differentiated channel conditions, and accommodating diverse user viewports pose significant challenges for VR video transcoding and scheduling. To tackle these challenges, our system utilizes dual-edge servers for video transcoding and slicing, thereby markedly improving the viewing experience compared to traditional cloud-based systems. Additionally, we devise a low-complexity greedy algorithm for multi-edge, multi-user VR video offloading distribution, employing the results of bitrate decisions to inversely guide video transcoding. Simulation results reveal that our strategy enhances system utility by 44.77% over existing state-of-the-art schemes that do not utilize edge servers, while reducing processing time by 58.54%.
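The abstract does not spell out the greedy offloading rule, so the following is only a minimal sketch of a utility-per-capacity greedy allocation under edge-server capacity limits; the `utility` callable, the candidate bitrates, and all identifiers are hypothetical stand-ins rather than the authors' algorithm.

```python
# Hypothetical greedy bitrate/offloading allocation sketch (not the paper's algorithm).
import math

BITRATES = [4, 8, 16, 25]  # candidate viewport bitrates in Mbps (illustrative)

def greedy_allocate(users, servers, utility):
    """Assign each user to an (edge server, bitrate) pair, greedily maximizing utility per Mbps.

    users: iterable of user ids
    servers: dict server_id -> remaining capacity in Mbps (mutated in place)
    utility: callable (user, server, bitrate) -> float
    """
    assignment = {}
    unassigned = set(users)
    while unassigned:
        best = None
        for u in unassigned:
            for s, cap in servers.items():
                for b in BITRATES:
                    if b > cap:
                        continue
                    gain = utility(u, s, b) / b  # utility per unit of consumed capacity
                    if best is None or gain > best[0]:
                        best = (gain, u, s, b)
        if best is None:  # no feasible (server, bitrate) choice left
            break
        _, u, s, b = best
        assignment[u] = (s, b)
        servers[s] -= b
        unassigned.remove(u)
    return assignment

# Toy run: log-rate utility scaled by a per-(user, server) channel quality factor.
channel = {("u1", "e1"): 1.0, ("u1", "e2"): 0.6, ("u2", "e1"): 0.7, ("u2", "e2"): 1.0}
print(greedy_allocate(["u1", "u2"], {"e1": 20, "e2": 20},
                      lambda u, s, b: channel[(u, s)] * math.log2(1 + b)))
```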
Citations: 0
A Database and Model for the Visual Quality Assessment of Super-Resolution Videos
IF 4.5 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-04-11 · DOI: 10.1109/TBC.2024.3382949
Fei Zhou;Wei Sheng;Zitao Lu;Guoping Qiu
Video super-resolution (SR) has important real-world applications such as enhancing the viewing experience of legacy low-resolution videos on high-resolution display devices. However, there are no visual quality assessment (VQA) models specifically designed for evaluating SR videos, even though such models are crucially important both for advancing video SR algorithms and for viewing quality assurance. This paper addresses this gap. We start by contributing the first video super-resolution quality assessment database (VSR-QAD), which contains 2,260 SR videos annotated with mean opinion score (MOS) labels collected through an approximately 400 man-hour psychovisual experiment involving a total of 190 subjects. We then build on the new VSR-QAD and develop the first VQA model specifically designed for evaluating SR videos. The model features a two-stream convolutional neural network architecture and a two-stage training algorithm designed for extracting spatial and temporal features characterizing the quality of SR videos. We present experimental results and data analysis to demonstrate the high data quality of VSR-QAD and the effectiveness of the new VQA model for measuring the visual quality of SR videos. The new database and the code of the proposed model will be available online at https://github.com/key1cdc/VSRQAD.
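As a rough illustration of the two-stream idea mentioned above, here is a minimal PyTorch sketch that pools features from a spatial stream (an SR frame) and a temporal stream (a frame difference) and regresses a single quality score; the layer sizes, the use of a frame difference as the temporal input, and all training details are assumptions, not the authors' architecture.

```python
# Minimal two-stream quality-regression sketch (illustrative dimensions only).
import torch
import torch.nn as nn

class TwoStreamVQA(nn.Module):
    def __init__(self):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.spatial = stream(3)    # operates on an SR frame
        self.temporal = stream(3)   # operates on a frame difference (proxy for motion)
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, frame, frame_diff):
        feats = torch.cat([self.spatial(frame), self.temporal(frame_diff)], dim=1)
        return self.head(feats)     # predicted MOS-like quality score

model = TwoStreamVQA()
frame = torch.rand(2, 3, 128, 128)
score = model(frame, frame - torch.rand_like(frame) * 0.1)
print(score.shape)  # torch.Size([2, 1])
```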
Citations: 0
Stable Viewport-Based Unsupervised Compressed 360° Video Quality Enhancement
IF 4.5 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-04-10 · DOI: 10.1109/TBC.2024.3380435
Zizhuang Zou;Mao Ye;Xue Li;Luping Ji;Ce Zhu
With the popularity of panoramic cameras and head-mounted displays, many 360° videos have been recorded. Due to the geometric distortion and boundary discontinuity of the 2D projection of 360° video, traditional 2D lossy video compression technology generates more artifacts. Therefore, it is necessary to enhance the quality of compressed 360° video. However, the characteristics of 360° video prevent traditional 2D enhancement models from working properly. Previous work therefore tries to obtain viewport sequences with smaller geometric distortions for enhancement. However, such sequences are difficult to obtain, and the trained enhancement model cannot be well adapted to a new dataset. To address these issues, we propose a Stable viewport-based Unsupervised compressed 360° video Quality Enhancement (SUQE) method. Our method consists of two stages. In the first stage, a new data preparation module is proposed, which adopts saliency-based data augmentation and viewport cropping techniques to generate the training dataset. A standard 2D enhancement model is trained on this dataset. To transfer the trained enhancement model to the target dataset, a shift prediction module is designed, which crops a shifted viewport clip as a supervision signal for model adaptation. In the second stage, by comparing the differences between the currently enhanced original and shifted frames, the Mean Teacher framework is employed to further fine-tune the enhancement model. Experimental results confirm that our method achieves satisfactory performance on the public dataset. The relevant models and code will be released.
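The Mean Teacher step mentioned in the abstract boils down to keeping an exponential-moving-average copy of the student network and training the student against it. Below is a generic PyTorch sketch of that update, with a plain linear layer standing in for the enhancement network and an arbitrarily chosen decay; it illustrates the framework in general, not the SUQE pipeline.

```python
# Generic Mean Teacher (EMA teacher + consistency loss) sketch.
import copy
import torch

def ema_update(teacher, student, decay=0.999):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

student = torch.nn.Linear(8, 8)       # stands in for the enhancement network
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)           # teacher is never updated by gradients

x = torch.rand(4, 8)                  # stands in for viewport clips
consistency_loss = torch.nn.functional.mse_loss(student(x), teacher(x))
consistency_loss.backward()           # train the student toward teacher predictions
ema_update(teacher, student)          # then refresh the teacher
```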
Citations: 0
Depth Video Inter Coding Based on Deep Frame Generation
IF 4.5 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-04-01 · DOI: 10.1109/TBC.2024.3374103
Ge Li;Jianjun Lei;Zhaoqing Pan;Bo Peng;Nam Ling
Because depth video contains large regions of similar, smooth content, depth frames can be selectively generated at the decoder side without being encoded and transmitted at the encoder side, yielding a significant improvement in coding efficiency. This paper proposes a deep frame generation-based depth video inter coding method to efficiently compress depth video. To reduce temporal redundancies of the depth video, the proposed method encodes depth key frames and directly generates reconstructions of depth non-key frames. Moreover, a warping-based frame generation network with boundary awareness (Ba-WFGNet) is designed to generate high-quality depth non-key frames at the decoder side. In the Ba-WFGNet, the temporal correlations among depth frames are utilized to generate a coarse depth non-key frame in a warping manner. Then, considering that the boundary quality of depth video has an important impact on view synthesis, a boundary-aware refinement module is designed to further refine the coarse depth non-key frame and produce high-quality boundaries. The proposed method is implemented in MIV, and experimental results verify that it achieves superior coding efficiency.
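A common way to realize "warping-based" frame generation is backward warping of a reference frame with a predicted motion field via `grid_sample`. The sketch below illustrates only that mechanical step with a random flow field standing in for the motion that a network such as Ba-WFGNet would predict; it is not the paper's network.

```python
# Flow-based backward warping sketch with torch.nn.functional.grid_sample.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """frame: (B,1,H,W) depth frame; flow: (B,2,H,W) pixel displacements (dx, dy)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid_x = (xs[None] + flow[:, 0]) / (w - 1) * 2 - 1   # normalize to [-1, 1]
    grid_y = (ys[None] + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)          # (B, H, W, 2), (x, y) order
    return F.grid_sample(frame, grid, align_corners=True)

prev_depth = torch.rand(1, 1, 64, 64)        # reconstructed key frame (toy data)
flow = torch.randn(1, 2, 64, 64) * 0.5       # placeholder motion field
coarse_non_key = warp(prev_depth, flow)      # coarse generated non-key frame
print(coarse_non_key.shape)                  # torch.Size([1, 1, 64, 64])
```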
Citations: 0
Subjective and Objective Quality Assessment of Multi-Attribute Retouched Face Images
IF 4.5 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-27 · DOI: 10.1109/TBC.2024.3374043
Guanghui Yue;Honglv Wu;Weiqing Yan;Tianwei Zhou;Hantao Liu;Wei Zhou
Facial retouching, which aims to enhance an individual's appearance digitally, has become popular in many areas of life, such as personal entertainment and commercial advertising. However, excessive use of facial retouching can affect public aesthetic values and, in turn, induce mental health issues. There is a growing need for comprehensive quality assessment of Retouched Face (RF) images. This paper aims to advance this topic through both subjective and objective studies. Firstly, we generate 2,500 RF images by retouching 250 high-quality face images across multiple attributes (i.e., eyes, nose, mouth, and facial shape) with different photo-editing tools. After that, we carry out a series of subjective experiments to evaluate the quality of multi-attribute RF images from various perspectives, and construct the Multi-Attribute Retouched Face Database (MARFD) with multi-labels. Secondly, considering that retouching alters the facial morphology, we introduce a multi-task learning based No-Reference (NR) Image Quality Assessment (IQA) method, named MTNet. Specifically, to capture high-level semantic information associated with geometric changes, MTNet treats the alteration degree estimation of retouching attributes as auxiliary tasks for the main task (i.e., the overall quality prediction). In addition, inspired by the perceptual effects of viewing distance, MTNet utilizes a multi-scale data augmentation strategy during network training to help the network better understand the distortions. Experimental results on MARFD show that our MTNet correlates well with subjective ratings and outperforms 16 state-of-the-art NR-IQA methods.
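The multi-task setup described above can be pictured as a shared backbone with one main quality head and several auxiliary attribute heads. The following PyTorch sketch shows that wiring with illustrative layer sizes; it is not MTNet's actual design, and the four attribute heads simply mirror the four retouching attributes named in the abstract.

```python
# Shared-backbone multi-task head sketch for NR-IQA (illustrative only).
import torch
import torch.nn as nn

class MultiTaskIQA(nn.Module):
    def __init__(self, n_attributes=4):   # eyes, nose, mouth, facial shape
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.quality_head = nn.Linear(64, 1)                    # main task: overall quality
        self.attr_heads = nn.ModuleList(
            [nn.Linear(64, 1) for _ in range(n_attributes)])    # auxiliary alteration degrees

    def forward(self, x):
        feat = self.backbone(x)
        return self.quality_head(feat), [h(feat) for h in self.attr_heads]

model = MultiTaskIQA()
quality, attr_degrees = model(torch.rand(2, 3, 224, 224))
print(quality.shape, len(attr_degrees))   # torch.Size([2, 1]) 4
```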
Citations: 0
Fast Transform Kernel Selection Based on Frequency Matching and Probability Model for AV1
IF 4.5 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-26 · DOI: 10.1109/TBC.2024.3374078
Zhijian Hao;Heming Sun;Guohao Xu;Jiaming Liu;Xiankui Xiong;Xuanpeng Zhu;Xiaoyang Zeng;Yibo Fan
As a fundamental component of video coding, transform coding concentrates the energy scattered in the spatial domain onto the upper-left region of the frequency domain. This concentration contributes significantly to rate-distortion performance improvement when combined with quantization and entropy coding. To better adapt to the dynamic characteristics of image content, Alliance for Open Media Video 1 (AV1) introduces multiple transform kernels, which brings substantial coding performance benefits, albeit at the cost of considerable computational complexity. In this paper, we propose a fast transform kernel selection algorithm for AV1 based on frequency matching and a probability model to effectively accelerate the coding process with an acceptable level of performance loss. Firstly, the concept of the Frequency Matching Factor (FMF), based on cosine similarity, is defined for the first time to describe the similarity between the residual block and the primary frequency basis image of the transform kernel. Statistical results demonstrate a clear distribution relationship between FMFs and normalized Rate-Distortion optimization costs (nRDOC). Then, leveraging these distribution characteristics, we establish a Gaussian probability model of nRDOC for each FMF by characterizing the parameters of the normal model as functions of FMFs, enhancing the model's accuracy and coding performance. Finally, based on the derived normal models, we design a scalable, hardware-friendly fast selection algorithm that skips non-promising transform kernels. Experimental results show that the performance loss of the proposed fast algorithm is 1.15% when 57.66% of the transform kernels are skipped, resulting in a 20.09% saving in encoding time; this is superior to other fast algorithms in the literature and competitive with the neural-network-based pruning algorithm in the AV1 reference software.
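To make the frequency-matching idea concrete, the sketch below computes a cosine similarity between a flattened residual block and a 2-D DCT-II basis image. Using DCT-II basis images and an 8x8 block is an assumption for illustration; AV1's actual transform kernels and the paper's exact choice of "primary frequency basis image" are not reproduced here.

```python
# Cosine-similarity "frequency matching" score between a residual block and a basis image.
import numpy as np

def dct_basis_image(N, u, v):
    """2-D DCT-II basis image of size N x N for frequency indices (u, v)."""
    n = np.arange(N)
    cu = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
    cv = np.sqrt(1.0 / N) if v == 0 else np.sqrt(2.0 / N)
    col = np.cos((2 * n + 1) * u * np.pi / (2 * N))
    row = np.cos((2 * n + 1) * v * np.pi / (2 * N))
    return cu * cv * np.outer(col, row)

def frequency_matching_factor(residual, basis):
    """Cosine similarity between the flattened residual block and a basis image."""
    r, b = residual.ravel(), basis.ravel()
    return float(np.dot(r, b) / (np.linalg.norm(r) * np.linalg.norm(b) + 1e-12))

block = np.random.randn(8, 8)                       # toy residual block
for (u, v) in [(0, 1), (1, 0), (1, 1)]:             # a few low-frequency basis images
    print((u, v), frequency_matching_factor(block, dct_basis_image(8, u, v)))
```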
Citations: 0
Diversity Receiver for ATSC 3.0-in-Vehicle: Design and Field Evaluation in Metropolitan SFN
IF 4.5 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-26 · DOI: 10.1109/TBC.2024.3374061
Sungjun Ahn;Bo-Mi Lim;Sunhyoung Kwon;Sungho Jeon;Xianbin Wang;Sung-Ik Park
This paper demonstrates the feasibility of multi-antenna reception to facilitate mobile broadcasting for vehicular receivers. Starting from a dimension analysis estimating the spatial capacity of automobiles, we confirm multi-antenna embedding as a viable solution for vehicular broadcast receivers. Accordingly, a rolling prototype of an ATSC 3.0 multi-antenna diversity receiver (DivRx) is implemented and repeatedly tested on public roads. The field verification tests in this paper aim to evaluate the performance of DivRx in real broadcast environments, represented by an urban single-frequency network (SFN) with high-power transmissions using ultra-high frequencies. To this end, extensive field trials are conducted in an operating ATSC 3.0 network located in the Seoul Metropolitan Area, South Korea. Public on-air services of 1080p and 4K videos are tested, targeting inter-city journeys and trips in urban centroids, respectively. The mobile reliability gain of DivRx is empirically evaluated in terms of coverage probability and the field strength required for 95% receivability. The results show that leveraging four antennas can achieve 99% coverage of intra-city 4K service in the current network status, yielding 65% more gain than single-antenna systems. It is also shown that the signal strength requirement can be reduced by 13 dB or more. In addition to the empirical evaluation, we provide theoretical proofs that align with the observations.
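The diversity gain being measured can be illustrated with a toy Monte-Carlo estimate of coverage probability under Rayleigh fading, where branch SNRs are simply added as in maximal-ratio combining. MRC, the SNR values, and the threshold below are illustrative assumptions, not the DivRx implementation or the field-trial conditions.

```python
# Monte-Carlo coverage probability of N-antenna maximal-ratio combining (illustrative).
import numpy as np

def coverage_probability(n_antennas, mean_snr_db, threshold_db, trials=200_000):
    rng = np.random.default_rng(0)
    mean_snr = 10 ** (mean_snr_db / 10)
    # Per-antenna instantaneous SNR under Rayleigh fading is exponentially distributed.
    snr = rng.exponential(mean_snr, size=(trials, n_antennas))
    combined = snr.sum(axis=1)                     # MRC output SNR is the sum of branch SNRs
    return float(np.mean(combined >= 10 ** (threshold_db / 10)))

for n in (1, 2, 4):
    print(n, "antennas:", coverage_probability(n, mean_snr_db=12, threshold_db=15))
# Coverage rises with the antenna count, which is the qualitative effect measured in the paper.
```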
Citations: 0
Deep Learning Approach for No-Reference Screen Content Video Quality Assessment
IF 4.5 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-26 · DOI: 10.1109/TBC.2024.3374042
Ngai-Wing Kwong;Yui-Lam Chan;Sik-Ho Tsang;Ziyin Huang;Kin-Man Lam
Screen content video (SCV) has drawn far more attention than ever during the COVID-19 period and has evolved from a niche into a mainstream medium due to the recent proliferation of remote offices, online meetings, shared-screen collaboration, and gaming live streaming. Therefore, quality assessment for screen content media is in high demand to maintain service quality. Although many practical natural scene video quality assessment methods have been proposed and have achieved promising results, these methods cannot be applied directly to the screen content video quality assessment (SCVQA) task, since the content characteristics of SCV are substantially different from those of natural scene video. Besides, only one no-reference SCVQA (NR-SCVQA) method, which requires handcrafted features, has been proposed in the literature. Therefore, we propose the first deep learning approach explicitly designed for NR-SCVQA. First, a multi-channel convolutional neural network (CNN) model is used to extract spatial quality features of pictorial and textual regions separately. Since there is no human-annotated quality score for each screen content frame (SCF), the CNN model is pre-trained in a multi-task self-supervised fashion to extract a spatial quality feature representation of each SCF. Second, we propose a time-distributed CNN transformer model (TCNNT) to further process all SCF spatial quality feature representations of an SCV and learn spatial and temporal features simultaneously, so that high-level spatiotemporal features of the SCV can be extracted and used to assess the quality of the whole SCV. Experimental results demonstrate the robustness and validity of our model, whose predictions are closely aligned with human perception.
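A time-distributed CNN followed by a transformer over per-frame features, as described above, can be sketched as follows in PyTorch; the dimensions, pooling, and two-layer encoder are placeholders and do not reproduce TCNNT or its self-supervised pre-training.

```python
# Time-distributed CNN + transformer encoder over frame features (illustrative sizes).
import torch
import torch.nn as nn

class TimeDistributedCNNTransformer(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, video):                      # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        frame_feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)   # same CNN on every frame
        return self.head(self.temporal(frame_feats).mean(dim=1))     # pooled video-level score

model = TimeDistributedCNNTransformer()
print(model(torch.rand(2, 8, 3, 64, 64)).shape)    # torch.Size([2, 1])
```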
Citations: 0
Enhancing Transportation Management in Marine Internet of Vessels: A 5G Broadcasting-Centric Framework Leveraging Federated Learning
IF 3.2 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-22 · DOI: 10.1109/TBC.2024.3394289
Desheng Chen;Jiabao Wen;Huiao Dai;Meng Xi;Shuai Xiao;Jiachen Yang
The Maritime Internet of Things (MIoT) consists of offshore equipment such as ships, consoles, and base stations, which are used for maritime information sharing to assist driving decision-making. However, with the increase in the number of MIoT access devices, the risks to information security and data reliability have also significantly increased. In this paper, we describe a maritime Dynamic Ship Federated Information Security Sharing Model (DSF-ISS) for the Maritime Internet of Vessels (MIoV) based on maritime 5G broadcasting technology. The main objective of this study is to solve the problem of maritime information islands under conditions of limited communication between ship nodes. In this model, cooperation among maritime ship nodes is based on the Contract Network Protocol (CNP), which considers the task types and the spatial and temporal distribution of different vessels. We then propose an improved federated learning approach for local dynamic nodes based on maritime 5G broadcasting technology. Moreover, this study designs a proof of membership (PoM) to share local task model information in a global blockchain. The results show that DSF-ISS has a positive effect on maritime transportation operations. It effectively realizes the secure sharing of information and protects the privacy of node data.
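The abstract's "improved federated learning approach" is not specified here, so the sketch below shows only plain FedAvg-style aggregation with a toy linear model: each ship node runs a few local gradient steps and the coordinator averages the resulting weights by local dataset size. Everything in it is a generic illustration, not DSF-ISS.

```python
# FedAvg-style aggregation sketch with a toy linear model (generic illustration).
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One node's local gradient descent for a linear model y = x @ w."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(node_weights, node_sizes):
    """Weighted average of node models, weighted by local dataset size."""
    total = sum(node_sizes)
    return sum(w * (n / total) for w, n in zip(node_weights, node_sizes))

rng = np.random.default_rng(1)
global_w = np.zeros(3)
for _ in range(10):                                # communication rounds
    updates, sizes = [], []
    for _ in range(4):                             # four participating ship nodes
        x = rng.normal(size=(50, 3))
        y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
        updates.append(local_update(global_w, x, y))
        sizes.append(len(y))
    global_w = federated_average(updates, sizes)
print(global_w)   # approaches the underlying coefficients [1.0, -2.0, 0.5]
```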
Citations: 0
JUST360: Optimizing 360-Degree Video Streaming Systems With Joint Utility
IF 4.5 · CAS Tier 1, Computer Science · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-21 · DOI: 10.1109/TBC.2024.3374066
Zhijun Li;Yumei Wang;Yu Liu;Junjie Li;Ping Zhu
360-degree videos, as a type of media that offers highly immersive experiences, often result in significant bandwidth waste due to incomplete views by users. This places a heavy demand on streaming systems to support high-bandwidth requirements. Recently, tile-based streaming systems combined with viewport prediction have become popular as a way to improve bandwidth efficiency. However, since viewport prediction is only reliable in the short term, maintaining a long buffer to avoid rebuffering is challenging. In this paper, we propose JUST360, a joint-utility-based two-tier 360-degree video streaming system. To improve the accuracy of utility evaluation, a utility model that incorporates image quality and prediction accuracy is proposed to evaluate the contribution of each tile, so that a longer buffer and bandwidth efficiency can coexist. The optimal bitrate allocation strategy is determined by using model predictive control (MPC) to dynamically select the tiles according to their characteristics. Experiments show that our method achieves higher PSNR and less rebuffering. Compared with other state-of-the-art methods, our proposed method outperforms them by 3%-20% in terms of QoE.
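The MPC step can be pictured as searching bitrate plans over a short horizon against a buffer model and then applying only the first decision. The toy sketch below does exactly that with a simplified log-quality utility and rebuffering penalty, which are placeholders rather than the JUST360 joint utility model.

```python
# Toy model-predictive bitrate selection over a short horizon (illustrative only).
import itertools
import math

BITRATES = [2, 5, 10, 20]   # Mbps per segment (illustrative)
SEGMENT_SEC = 1.0

def plan_value(plan, buffer_sec, bandwidth_mbps):
    value = 0.0
    for rate in plan:
        download = rate * SEGMENT_SEC / bandwidth_mbps   # seconds to fetch the segment
        buffer_sec -= download
        if buffer_sec < 0:                               # rebuffering penalty
            value += 10 * buffer_sec
            buffer_sec = 0
        buffer_sec += SEGMENT_SEC                        # fetched segment adds playback time
        value += math.log(1 + rate)                      # quality term
    return value

def mpc_select(buffer_sec, bandwidth_mbps, horizon=3):
    best_plan = max(itertools.product(BITRATES, repeat=horizon),
                    key=lambda p: plan_value(p, buffer_sec, bandwidth_mbps))
    return best_plan[0]   # apply only the first decision, then re-plan next segment

print(mpc_select(buffer_sec=2.0, bandwidth_mbps=8.0))
```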
Citations: 0