
Latest Publications: IEEE Transactions on Broadcasting

Synergistic Temporal-Spatial User-Aware Viewport Prediction for Optimal Adaptive 360-Degree Video Streaming
IF 4.5 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-21 | DOI: 10.1109/TBC.2024.3374119
Yumei Wang;Junjie Li;Zhijun Li;Simou Shang;Yu Liu
360-degree videos usually require extremely high bandwidth and low latency for wireless transmission, which hinders their popularity. Researchers have proposed tile-based viewport-adaptive streaming schemes, which combine accurate viewport prediction with optimal bitrate adaptation to maintain user Quality of Experience (QoE) over bandwidth-constrained networks. However, viewport prediction is error-prone over long horizons, and bitrate adaptation schemes may waste bandwidth by failing to consider various aspects of QoE. In this paper, we propose a synergistic temporal-spatial user-aware viewport prediction scheme for optimal adaptive 360-degree video streaming (SPA360) to tackle these challenges. We use a user-aware viewport prediction mode, which offers a white-box solution for Field of View (FoV) prediction. Specifically, we employ temporal-spatial fusion for enhanced viewport prediction to minimize prediction errors. Our proposed utility prediction model jointly considers the viewport probability distribution and metrics that directly affect QoE to enable more precise bitrate adaptation. To optimize bitrate adaptation for tile-based 360-degree video streaming, the problem is formulated as a packet knapsack problem and solved efficiently with a dynamic-programming-based algorithm to maximize utility. The SPA360 scheme demonstrates improved performance in terms of both viewport prediction accuracy and bandwidth utilization, and our approach enhances the overall quality and efficiency of adaptive 360-degree video streaming.
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 453-467, 2024.
Citations: 0
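The bitrate-adaptation step that the abstract formulates as a packet knapsack can be pictured as a multiple-choice knapsack solved by dynamic programming: each tile must receive exactly one bitrate level, and the total cost must fit the bandwidth budget. The utilities, costs, and budget below are illustrative stand-ins, not the paper's actual utility model:

```python
def allocate_tile_bitrates(utilities, costs, budget):
    """Multiple-choice knapsack via dynamic programming.

    utilities[t][l], costs[t][l]: utility and integer cost of giving tile t
    its l-th bitrate level; every tile must pick exactly one level.
    Returns (best_total_utility, chosen_level_per_tile).
    """
    NEG = float("-inf")
    n = len(utilities)
    dp = [NEG] * (budget + 1)   # dp[b] = best utility at exact spend b
    dp[0] = 0.0
    choice = []                 # choice[t][b] = level picked to reach spend b
    for t in range(n):
        ndp = [NEG] * (budget + 1)
        pick = [-1] * (budget + 1)
        for b in range(budget + 1):
            if dp[b] == NEG:
                continue
            for l, (u, c) in enumerate(zip(utilities[t], costs[t])):
                nb = b + c
                if nb <= budget and dp[b] + u > ndp[nb]:
                    ndp[nb] = dp[b] + u
                    pick[nb] = l
        dp, _ = ndp, choice.append(pick)
    best_b = max(range(budget + 1), key=lambda b: dp[b])
    levels, b = [0] * n, best_b
    for t in range(n - 1, -1, -1):      # backtrack the chosen levels
        levels[t] = choice[t][b]
        b -= costs[t][levels[t]]
    return dp[best_b], levels
```

With two tiles, two levels each, and a budget of 3 cost units, the solver spends the extra budget on the tile whose upgrade yields more utility.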
No-Reference Multi-Level Video Quality Assessment Metric for 3D-Synthesized Videos
IF 4.5 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-21 | DOI: 10.1109/TBC.2024.3396696
Guangcheng Wang;Baojin Huang;Ke Gu;Yuchen Liu;Hongyan Liu;Quan Shi;Guangtao Zhai;Wenjun Zhang
The visual quality of 3D-synthesized videos is closely related to the development and broadcasting of immersive media such as free-viewpoint video and six-degrees-of-freedom navigation. Therefore, studying 3D-synthesized video quality assessment helps promote the adoption of immersive media applications. Motivated by the fact that texture compression, depth compression, and virtual view synthesis degrade the visual quality of 3D-synthesized videos at the pixel, structure, and content levels, this paper proposes a Multi-Level 3D-Synthesized Video Quality Assessment algorithm, namely ML-SVQA, which consists of a quality feature perception module and a quality feature regression module. Specifically, the quality feature perception module first extracts motion vector fields of the 3D-synthesized video at the pixel, structure, and content levels, drawing on the perception mechanism of the human visual system. It then measures the temporal flicker distortion intensity in a no-reference setting by computing the self-similarity of adjacent motion vector fields. Finally, the quality feature regression module uses a machine learning algorithm to learn the mapping from the extracted quality features to the quality score. Experiments on the public IRCCyN/IVC and SIAT synthesized-video datasets show that ML-SVQA is more effective than state-of-the-art image/video quality assessment methods at evaluating the quality of 3D-synthesized videos.
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 584-596, 2024.
Citations: 0
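The "self-similarity of adjacent motion vector fields" idea can be illustrated with a toy no-reference flicker measure. ML-SVQA itself operates at pixel, structure, and content levels; this sketch simply compares whole fields by cosine similarity, so steady motion scores near zero flicker and alternating motion scores high:

```python
import numpy as np

def flicker_intensity(mv_fields):
    """Toy temporal-flicker measure: one minus the mean cosine similarity
    between adjacent motion-vector fields. mv_fields: shape (T, H, W, 2)."""
    flat = mv_fields.reshape(len(mv_fields), -1)
    sims = []
    for a, b in zip(flat[:-1], flat[1:]):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(float(a @ b / denom) if denom > 0 else 1.0)
    return 1.0 - float(np.mean(sims))
```

Identical adjacent fields give intensity 0; fields whose direction flips every frame give the maximum value of 2.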
Deep Compressed Video Super-Resolution With Guidance of Coding Priors
IF 4.5 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-21 | DOI: 10.1109/TBC.2024.3394291
Qiang Zhu;Feiyu Chen;Yu Liu;Shuyuan Zhu;Bing Zeng
Compressed video super-resolution (VSR) is employed to generate high-resolution (HR) videos from low-resolution (LR) compressed videos. Recently, some compressed VSR methods have adopted coding priors, such as partition maps, compressed residual frames, predictive pictures, and motion vectors, to generate HR videos. However, these methods do not tailor their module designs to the specific characteristics of the coding information, which limits how effectively the coding priors are exploited. In this paper, we propose a deep compressed VSR network that effectively introduces coding priors to construct high-quality HR videos. Specifically, we design a partition-guided feature extraction module that extracts features from the LR video under the guidance of the partition average image. Moreover, we separate the video features into sparse and dense features according to the energy distribution of the compressed residual frame to achieve feature enhancement. Additionally, we construct a temporal-attention-based feature fusion module that uses motion vectors and predictive pictures to eliminate motion errors between frames and temporally fuse features. Based on these modules, the coding priors are effectively employed in our model for constructing high-quality HR videos. The experimental results demonstrate that our method achieves better performance and lower complexity than state-of-the-art approaches.
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 505-515, 2024.
Citations: 0
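The energy-guided separation of features into sparse and dense parts might look like the following toy sketch, where a threshold on per-pixel residual energy (an assumed mechanism for illustration, not the paper's exact rule) masks the feature map into two complementary components:

```python
import numpy as np

def split_by_residual_energy(features, residual, thresh):
    """Toy split of per-pixel features into a 'dense' part (high residual
    energy, i.e., poorly predicted regions) and a 'sparse' part (the rest).
    features: (H, W, C); residual: (H, W). Returns (dense, sparse)."""
    energy = residual.astype(np.float64) ** 2
    mask = (energy > thresh)[..., None]      # broadcast mask over channels
    dense = np.where(mask, features, 0.0)
    sparse = np.where(mask, 0.0, features)
    return dense, sparse
```

The two outputs sum back to the original feature map, so downstream enhancement branches can process them separately without losing information.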
ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution
IF 4.5 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-21 | DOI: 10.1109/TBC.2024.3374122
Axi Niu;Trung X. Pham;Kang Zhang;Jinqiu Sun;Yu Zhu;Qingsen Yan;In So Kweon;Yanning Zhang
Diffusion models have gained significant popularity for image-to-image translation tasks. Previous efforts applying diffusion models to image super-resolution have demonstrated that iteratively refining pure Gaussian noise with a U-Net architecture trained to denoise at various noise levels can yield satisfactory high-resolution images from low-resolution inputs. However, this iterative refinement process suffers from slow inference, which strongly limits its applications. To speed up inference and further enhance performance, our research revisits diffusion models for image super-resolution and proposes a straightforward yet effective diffusion-based super-resolution method called ACDMSR (accelerated conditional diffusion model for image super-resolution). Specifically, we adopt existing image super-resolution methods and fine-tune them to provide conditional images from given low-resolution images, which helps achieve better high-resolution results than using the low-resolution images themselves as conditions. We then adapt the diffusion model to perform super-resolution through a deterministic iterative denoising process, which substantially reduces inference time. We demonstrate that our method surpasses previous attempts in both qualitative and quantitative results through extensive experiments on benchmark datasets such as Set5, Set14, Urban100, BSD100, and Manga109. Moreover, our approach generates more visually realistic counterparts for low-resolution images, underscoring its effectiveness in practical scenarios.
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 492-504, 2024.
Citations: 0
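A "deterministic iterative denoising process" of the kind ACDMSR accelerates is commonly realized as an eta-zero DDIM update. The scalar sketch below shows one such step under standard diffusion notation (abar is the cumulative product of alphas); it illustrates the general technique, not the paper's exact sampler:

```python
import math

def ddim_step(x_t, eps_pred, abar_t, abar_prev):
    """One deterministic (eta = 0) DDIM update.

    x_t: current noisy sample; eps_pred: the network's noise estimate;
    abar_t, abar_prev: cumulative alpha products at the current and
    previous (less noisy) timesteps. Scalars for clarity."""
    # Predict the clean sample implied by the noise estimate ...
    x0 = (x_t - math.sqrt(1 - abar_t) * eps_pred) / math.sqrt(abar_t)
    # ... then re-noise it deterministically to the previous level.
    return math.sqrt(abar_prev) * x0 + math.sqrt(1 - abar_prev) * eps_pred
```

If the noise estimate is exact and abar_prev = 1 (the final step), the update recovers the clean sample exactly, which is what makes skipping many intermediate steps viable.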
Learning Accurate Network Dynamics for Enhanced Adaptive Video Streaming
IF 3.2 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-17 | DOI: 10.1109/TBC.2024.3396698
Jiaoyang Yin;Hao Chen;Yiling Xu;Zhan Ma;Xiaozhong Xu
The adaptive bitrate (ABR) algorithm plays a crucial role in ensuring satisfactory quality of experience (QoE) in video streaming applications. Most existing approaches, whether rule-based or learning-driven, make ABR decisions based on limited network statistics, e.g., the mean and standard deviation of recent throughput measurements. However, all of them lack a good understanding of network dynamics given that network conditions vary over time, leading to compromised performance, especially when the network condition changes significantly. In this paper, we propose a framework named ANT that aims to enhance adaptive video streaming by accurately learning network dynamics. ANT represents and detects specific network conditions by characterizing the entire spectrum of network fluctuations. It further trains multiple dedicated ABR models, one per condition, using deep reinforcement learning. During inference, a dynamic switching mechanism activates the appropriate ABR model based on real-time sensing of the network condition, enabling ANT to automatically adjust its control policies to different network conditions. Extensive experimental results demonstrate that ANT improves user QoE by 20.8%-41.2% in the video-on-demand scenario and by 67.4%-134.5% in the live-streaming scenario compared to state-of-the-art methods, across a wide range of network conditions.
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 808-821, 2024.
Citations: 0
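ANT's dynamic switching can be pictured as classifying the recent throughput trace into a condition bucket and dispatching to the dedicated policy trained for that bucket. The thresholds and placeholder policies below are invented for illustration; ANT's real detector characterizes the full spectrum of fluctuations and its policies are RL models:

```python
def classify_condition(throughputs):
    """Crude stand-in for a network-condition detector: bucket a recent
    throughput trace (Mbps) by mean level and fluctuation."""
    mean = sum(throughputs) / len(throughputs)
    var = sum((x - mean) ** 2 for x in throughputs) / len(throughputs)
    level = "high" if mean >= 3.0 else "low"       # arbitrary cut for demo
    shape = "stable" if var < 1.0 else "volatile"
    return f"{level}-{shape}"

def select_bitrate(throughputs, models):
    """Dispatch to the per-condition policy, mimicking ANT's switching.
    models: dict mapping condition name -> callable(trace) -> bitrate."""
    return models[classify_condition(throughputs)](throughputs)
```

A steady 4 Mbps trace routes to the "high-stable" policy, while a trace swinging between 1 and 5 Mbps routes to "low-volatile", so each regime gets a policy specialized for it.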
LDPC-Coded LDM Systems Employing Non-Uniform Injection Level for Combining Broadcast and Multicast/Unicast Services
IF 3.2 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-16 | DOI: 10.1109/TBC.2024.3394296
Hao Ju;Yin Xu;Ruiqi Liu;Dazhi He;Sungjun Ahn;Namho Hur;Sung-Ik Park;Wenjun Zhang;Yiyan Wu
Layered Division Multiplexing (LDM) is a Power-based Non-Orthogonal Multiplexing (P-NOM) technique that has been implemented in the Advanced Television Systems Committee (ATSC) 3.0 terrestrial TV physical layer to effectively multiplex services with different robustness and data rate requirements. As communication systems evolve, the services to be delivered are becoming more diverse and versatile. To date, the LDM system adopted in terrestrial TV uses a uniform injection level for the lower-layer (or Layer 2) signal. This paper investigates non-uniform injection level LDM (NULDM). The proposed technique exploits the Unequal Error Protection (UEP) property of Low-Density Parity-Check (LDPC) codes and the flexible power allocation of NULDM to improve system performance and spectrum efficiency. NULDM enables the seamless integration of broadcast/multicast and unicast services in one RF channel, where the unicast signal can be assigned different resources (power, frequency, and time) based on UE distance and service requirements. Meanwhile, more power can be allocated to improve the upper-layer (or Layer 1) broadcast and datacast services. To make better use of the UEP property of LDPC codes in NULDM, the extended Gaussian mixture approximation (EGMA) method is used to design bit-interleaving patterns. Additionally, inspired by the channel order of polar codes, this paper proposes an LDPC sub-block interleaving order (SBIO) scheme that performs similarly to the EGMA interleaving model while better adapting to the diverse needs of the proposed mixed service delivery scenarios for the convergence of broadband wireless communications and broadcasting systems.
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 1032-1043, 2024.
Citations: 0
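The core LDM operation is injecting a lower-layer signal a configurable number of dB below the upper layer and normalizing total power; making that injection level a per-symbol vector is what "non-uniform" adds. The sketch below models the combining step only, under the common power-normalized formulation, and does not reproduce the paper's allocation strategy:

```python
import math

def ldm_combine(upper, lower, injection_db):
    """Combine upper-layer and lower-layer symbols with a per-symbol
    injection level (dB below the upper layer), normalizing so that two
    unit-power, independent layers yield a unit-power composite."""
    out = []
    for u, l, d in zip(upper, lower, injection_db):
        g = 10 ** (-d / 20)                    # linear injection gain
        out.append((u + g * l) / math.sqrt(1 + g ** 2))
    return out
```

At 0 dB injection the layers combine with equal weight; at a very deep injection level (e.g., 60 dB) the composite is essentially the upper-layer signal alone, which is the trade-off a non-uniform scheme tunes per resource unit.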
An Innovative Adaptive Web-Based Solution for Improved Remote Co-Creation and Delivery of Artistic Performances
IF 4.5 | CAS Tier 1 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-13 | DOI: 10.1109/TBC.2024.3363455
Mohammed Amine Togou;Anderson Augusto Simiscuka;Rohit Verma;Noel E. O’Connor;Iñigo Tamayo;Stefano Masneri;Mikel Zorrilla;Gabriel-Miro Muntean
Due to the COVID-19 pandemic, most arts and cultural activities have moved online. This has contributed to a surge in the development of tools that enable professional artists to produce engaging and immersive shows remotely. This article introduces the TRACTION Co-Creation Stage (TCS), a novel Web-based solution, designed and developed in the context of the EU Horizon 2020 TRACTION project, which allows for remote creation and delivery of artistic shows. TCS supports multiple artists performing simultaneously, either live or pre-recorded, on multiple stages at different geographical locations. It employs a client-server approach. The client has two major components: Control and Display. The former is used by production teams to create shows by specifying layouts, scenes, and media sources to be included; the latter is used by viewers to watch the various shows. To ensure good viewer quality of experience (QoE), TCS employs adaptive streaming via a novel Prioritised Adaptation solution built on the DASH standard for pre-recorded content delivery (PADA), which is introduced in this paper. User tests and experiments evaluate the performance of TCS’ Control and Display applications and of the PADA algorithm when creating and distributing opera shows.
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 719-730, 2024. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10472407
Citations: 0
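The abstract does not spell out how PADA prioritises streams, so as a rough illustration only, the sketch below shows one plausible priority-driven bitrate allocation for concurrent DASH streams under a shared bandwidth budget. All names, the bitrate ladders, and the greedy upgrade rule are assumptions for illustration, not the published PADA algorithm.

```python
# Hypothetical sketch: allocate DASH bitrates to multiple stage streams,
# upgrading higher-priority streams first while a bandwidth budget lasts.

def allocate_bitrates(streams, ladders, budget_kbps):
    """streams: list of (stream_id, priority) pairs (larger = more important).
    ladders: dict stream_id -> ascending list of available bitrates (kbps).
    Returns dict stream_id -> chosen bitrate (kbps)."""
    # Start every stream at its lowest representation so nobody stalls.
    choice = {sid: ladders[sid][0] for sid, _ in streams}
    spent = sum(choice.values())
    # Upgrade streams in descending priority order while budget remains.
    for sid, _prio in sorted(streams, key=lambda s: -s[1]):
        for rate in ladders[sid][1:]:
            extra = rate - choice[sid]
            if spent + extra > budget_kbps:
                break
            choice[sid] = rate
            spent += extra
    return choice

demo = allocate_bitrates(
    [("main_stage", 2), ("side_stage", 1)],
    {"main_stage": [500, 1500, 3000], "side_stage": [500, 1500, 3000]},
    budget_kbps=4500,
)
# The high-priority main stage reaches its top rate before the side stage.
```

The design choice here mirrors the abstract's intent: under constrained bandwidth, the stream the production marks as most important degrades last.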
Deep-Learning-Based Classifier With Custom Feature-Extraction Layers for Digitally Modulated Signals
IF 3.2 · CAS Tier 1 (Computer Science) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-10 · DOI: 10.1109/TBC.2024.3391056
John A. Snoap;Dimitrie C. Popescu;Chad M. Spooner
The paper presents a novel deep-learning (DL) based classifier for digitally modulated signals that uses a capsule network (CAP) with custom-designed feature extraction layers. The classifier takes the in-phase/quadrature (I/Q) components of the digitally modulated signal as input, and the feature extraction layers are inspired by cyclostationary signal processing (CSP) techniques, which extract the cyclic cumulant (CC) features that are employed by conventional CSP-based approaches to blind modulation classification and signal identification. Specifically, the feature extraction layers implement a proxy of the mathematical functions used in the calculation of the CC features and include a squaring layer, a raise-to-the-power-of-three layer, and a fast-Fourier-transform (FFT) layer, along with additional normalization and warping layers to ensure that the relative signal powers are retained and to prevent the trainable neural network (NN) layers from diverging in the training process. The classification performance and the generalization abilities of the proposed CAP are tested using two distinct datasets that contain similar classes of digitally modulated signals but that have been generated independently, and numerical results obtained reveal that the proposed CAP with novel feature extraction layers achieves high classification accuracy while also outperforming alternative DL-based approaches for signal classification in terms of both classification accuracy and generalization abilities.
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 763-773.
Citations: 0
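The transforms that the paper's custom layers approximate (squaring, raising to the power of three, and an FFT) can be shown directly on raw I/Q samples. A minimal NumPy sketch follows, assuming a toy BPSK burst; the trainable normalization and warping layers of the actual capsule network are omitted, and this is not the paper's implementation:

```python
import numpy as np

def csp_style_features(iq):
    """Spectra of the raw, squared, and cubed signal: simple proxies for
    the cyclic-cumulant features that inspired the custom layers."""
    feats = {}
    for name, x in (("x1", iq), ("x2", iq ** 2), ("x3", iq ** 3)):
        feats[name] = np.abs(np.fft.fftshift(np.fft.fft(x))) / len(x)
    return feats

# Toy BPSK burst at normalized carrier offset 0.125: squaring removes the
# +/-1 data and leaves a pure tone at twice the offset (0.25), the kind of
# cyclic feature CSP-based classifiers exploit.
n = np.arange(1024)
bits = np.sign(np.random.default_rng(0).standard_normal(1024))
bpsk = bits * np.exp(2j * np.pi * 0.125 * n)
features = csp_style_features(bpsk)
peak_bin = int(np.argmax(features["x2"]))  # bin 768 <-> frequency 0.25 after fftshift
```

The squared spectrum has a single sharp peak while the raw spectrum is spread by the random data, which is exactly why such nonlinear features help blind modulation classification.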
Removing Banding Artifacts in HDR Videos Generated From Inverse Tone Mapping
IF 4.5 · CAS Tier 1 (Computer Science) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-10 · DOI: 10.1109/TBC.2024.3394297
Fei Zhou;Zikang Zheng;Guoping Qiu
Displaying standard dynamic range (SDR) videos on high dynamic range (HDR) devices requires inverse tone mapping (ITM). However, such mapping can introduce banding artifacts. This paper presents a banding removal method for inversely tone mapped HDR videos based on deep convolutional neural networks (DCNNs) and adaptive filtering. Three banding relevant feature maps are first extracted and then fed to two DCNNs, a ShapeNet and a PositionNet. The PositionNet learns a soft mask indicating the locations where banding is likely to have occurred and filtering is required while the ShapeNet predicts the filter shapes appropriate for different locations. An advantage of the method is that the adaptive filters can be jointly optimized with a learning-based ITM algorithm for creating high-quality HDR videos. Experimental results show that our method outperforms state-of-the-art algorithms qualitatively and quantitatively.
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 753-762.
Citations: 0
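The mask-and-filter idea at the heart of the method is easy to show in one dimension: blend a filtered signal with the original under a soft mask. In the toy sketch below, a fixed box kernel stands in for the location-dependent filter shapes the ShapeNet would predict, and the all-ones mask stands in for the PositionNet output; both are illustrative assumptions, not the learned components.

```python
import numpy as np

def masked_deband(signal, mask, kernel):
    """Blend filtered and original samples using a soft mask in [0, 1]:
    out = mask * filtered + (1 - mask) * original."""
    smoothed = np.convolve(signal, kernel, mode="same")
    return mask * smoothed + (1.0 - mask) * signal

banded = np.repeat([0.0, 0.25, 0.5], 5)  # staircase luminance, like banding steps
mask = np.ones_like(banded)              # filter everywhere in this toy example
kernel = np.ones(3) / 3.0                # simple box blur as the filter "shape"
debanded = masked_deband(banded, mask, kernel)
# Step edges are softened; samples inside flat regions are unchanged.
```

Where the mask is 0 the frame passes through untouched, so a well-trained PositionNet confines the smoothing to banded regions and preserves genuine detail elsewhere.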
Optimal OFDM-IM Signals With Constant PAPR
IF 3.2 · CAS Tier 1 (Computer Science) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-10 · DOI: 10.1109/TBC.2024.3394292
Jiabo Hu;Yajun Wang;Zhuxian Lian;Yinjie Su;Zhibin Xie
Orthogonal frequency division multiplexing with index modulation (OFDM-IM), an emerging multi-carrier modulation technique, offers significant advantages over traditional OFDM. The OFDM-IM scheme exhibits superior bit error rate (BER) performance at low and medium data rates, while also enhancing resilience to inter-carrier interference in dynamically changing channels. However, the challenge of a high peak-to-average power ratio (PAPR) persists in OFDM-IM. In this study, we propose a novel approach to mitigate PAPR by introducing a small dither signal on the idle subcarriers, leveraging the inherent characteristics of OFDM-IM. Subsequently, we address the nonconvex and non-smooth optimization problem of minimizing the maximum amplitude of the dither signals while maintaining a constant PAPR constraint. To tackle this challenging optimization task effectively, we adopt the linearized alternating direction method of multipliers (LADMM), referred to as the LADMM-direct algorithm, which provides a simple closed-form solution for each subproblem encountered during the optimization process. To improve the convergence rate of the LADMM-direct algorithm, a LADMM-relax algorithm is also proposed to address the PAPR problem.
Simulation results demonstrate that our proposed LADMM-direct and LADMM-relax algorithms significantly reduce computational complexity and achieve superior performance in terms of both PAPR and BER compared to state-of-the-art algorithms.
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 945-954.
Citations: 0
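The PAPR objective the paper minimizes is simple to state and measure. The sketch below computes the PAPR of a toy OFDM-IM symbol in which half the subcarriers are active and the rest are idle; the subcarrier index pattern, QPSK mapping, and oversampling factor are illustrative assumptions, and the LADMM dither optimization itself is not reproduced.

```python
import numpy as np

def papr_db(freq_symbol, oversample=4):
    """Peak-to-average power ratio (dB) of the time-domain OFDM signal,
    using spectral zero-padding to approximate the continuous envelope."""
    n = len(freq_symbol)
    padded = np.concatenate([freq_symbol[: n // 2],
                             np.zeros((oversample - 1) * n, dtype=complex),
                             freq_symbol[n // 2:]])
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(1)
n_sub, active = 64, 32
idx = rng.choice(n_sub, active, replace=False)   # OFDM-IM active-index pattern
sym = np.zeros(n_sub, dtype=complex)
sym[idx] = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, active) + 1))  # QPSK
print(f"PAPR without dither: {papr_db(sym):.2f} dB")
```

The idle entries of `sym` are the degrees of freedom the paper exploits: the LADMM algorithms would fill them with a small dither chosen to pull `papr_db` down to a constant target.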
Journal: IEEE Transactions on Broadcasting
Book学术
文献互助 智能选刊 最新文献 互助须知 联系我们:info@booksci.cn
Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1