
Latest Articles in IEEE Transactions on Broadcasting

IM-Based Pilot-Assisted Channel Estimation for FTN Signaling HF Communications
IF 3.2 | CAS Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-07 | DOI: 10.1109/TBC.2024.3391025
Simin Keykhosravi;Ebrahim Bedeer
This paper investigates doubly-selective (i.e., time- and frequency-selective) channel estimation in faster-than-Nyquist (FTN) signaling HF communications. In particular, we propose a novel index modulation (IM)-based channel estimation algorithm for FTN signaling HF communications, comprising a pilot sequence placement (PSP) algorithm and a pilot sequence location identification (PSLI) algorithm. At the transmitter, the proposed PSP algorithm uses the locations of pilot sequences to carry additional information bits, thereby improving the spectral efficiency (SE) of HF communications. HF channels have two non-zero independent fading paths with specific fixed delay spread and frequency spread characteristics, as outlined in International Telecommunication Union Radiocommunication Sector (ITU-R) Recommendations F.1487 and F.520. Based on these properties of HF channels and the favorable auto-correlation characteristics of the optimal pilot sequence, we propose a novel PSLI algorithm that effectively identifies the pilot sequence location within a given frame at the receiver. This is achieved by showing that the squared magnitude of the cross-correlation between the received symbols and the pilot sequence consists of a scaled version of the squared magnitude of the auto-correlation of the pilot sequence, weighted by the gain of the corresponding HF channel path. Simulation results show very low pilot sequence location identification errors for HF channels. Our simulation results show a 6 dB improvement in the mean squared error (MSE) of the channel estimation, as well as about a 3.5 dB bit error rate (BER) improvement of FTN signaling, along with an enhancement in SE compared to the method in Ishihara and Sugiura (2017). We also achieve an enhancement in SE compared to the work in Keykhosravi and Bedeer (2023) while maintaining comparable channel estimation MSE and BER performance.
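The PSLI idea lends itself to a short numerical illustration. The sketch below is not the authors' algorithm: it assumes a hypothetical random ±1 pilot, a single flat channel gain, and integer symbol offsets, and simply locates the pilot by the peak of the squared cross-correlation magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical +/-1 pilot sequence with a sharp auto-correlation peak.
pilot = rng.choice([-1.0, 1.0], size=32)

# A frame of random data symbols with the pilot embedded at a known offset.
offset = 100
frame = rng.choice([-1.0, 1.0], size=256)
frame[offset:offset + len(pilot)] = pilot

# Stand-in for one HF channel path: a flat gain plus additive noise.
received = 0.8 * frame + 0.05 * rng.standard_normal(256)

# PSLI sketch: the squared magnitude of the cross-correlation between the
# received symbols and the pilot peaks at the pilot location, scaled by the
# squared gain of the channel path.
corr = np.array([
    np.abs(np.dot(received[k:k + len(pilot)], pilot)) ** 2
    for k in range(len(received) - len(pilot) + 1)
])
estimated_offset = int(np.argmax(corr))
print(estimated_offset)
```

Because the pilot location itself carries information bits under PSP, recovering the offset reliably is what makes the extra spectral efficiency possible.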
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 774-791, 2024.
Citations: 0
A Proposal to Use ROUTE/DASH in the Advanced ISDB-T
IF 3.2 | CAS Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-06-03 | DOI: 10.1109/TBC.2024.3402380
George Henrique Maranhão Garcia de Oliveira;Gustavo de Melo Valeira;Cristiano Akamine
In Brazil, the Television (TV) 3.0 project has been underway since 2020 and is currently in its third phase. The aim of this project is to study, test, and validate state-of-the-art technologies in order to define the techniques that will make up the next generation of the Brazilian Digital Terrestrial Television Broadcasting (DTTB) system. All technologies in this system must be compatible with the transport method defined in Phase 02 of the project: the Real-Time Object Delivery over Unidirectional Transport (ROUTE)/Dynamic Adaptive Streaming over HTTP (DASH) method from the Advanced Television Systems Committee (ATSC) 3.0 standard. This paper therefore proposes the use of the ROUTE/DASH transport method in the Advanced Integrated Services Digital Broadcasting Terrestrial (ISDB-T) system, presenting the underlying theory and the results of the first transmission carried out with the two aforementioned technologies.
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 935-944, 2024.
Citations: 0
Beyond Complexity Limits: Machine Learning for Sidelink-Assisted mmWave Multicasting in 6G
IF 3.2 | CAS Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-30 | DOI: 10.1109/TBC.2024.3382959
Nadezhda Chukhno;Olga Chukhno;Sara Pizzi;Antonella Molinaro;Antonio Iera;Giuseppe Araniti
The latest technological developments have fueled revolutionary changes and improvements in wireless communication systems. Among them, mmWave spectrum exploitation stands out for its ability to deliver ultra-high data rates. However, its full adoption in beyond-fifth-generation multicast systems (5G+/6G) remains hampered, mainly due to mobility robustness issues. In this work, we propose a solution to the problem of efficient sidelink-assisted multicasting in mobile multimode systems, specifically by considering the possibility of jointly utilizing sidelink/device-to-device (D2D), unicast, and multicast transmissions to improve service delivery. To overcome the complexity of finding the optimal solution for user-mode binding, we introduce a pre-optimization step called multicast group formation (MGF). Through a clustering technique based on unsupervised machine learning, MGF reduces the complexity of solving the sidelink-assisted multiple modes mmWave (SA3M) problem. A detailed analysis of the impact of various system parameters on performance is conducted, and numerical evidence of the complexity/performance trade-off and its dependence on mobility patterns and user distribution is provided. In particular, our proposed solution achieves a network throughput improvement of up to 32% over state-of-the-art schemes while ensuring the lowest computational time. Finally, the results demonstrate that an effective balance between power consumption and latency can be achieved through appropriate adjustments of transmit power and bandwidth.
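The MGF step can be pictured with a toy clustering example. The snippet below is a generic k-means sketch over hypothetical 2-D user positions, not the paper's SA3M formulation; it only illustrates how unsupervised clustering partitions users into candidate multicast groups before mode selection.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical user positions (x, y) around three hotspots, 20 users each.
hotspots = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
users = np.vstack([c + rng.standard_normal((20, 2)) for c in hotspots])

def kmeans(points, init, iters=50):
    """Plain k-means, standing in for the unsupervised clustering of MGF."""
    cent = init.copy()
    for _ in range(iters):
        # Assign each user to the nearest multicast-group centroid.
        dists = np.linalg.norm(points[:, None, :] - cent[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its current group.
        for j in range(len(cent)):
            if np.any(labels == j):
                cent[j] = points[labels == j].mean(axis=0)
    return labels, cent

# Seed one centroid per hotspot block to keep the toy example deterministic.
labels, cent = kmeans(users, init=users[[0, 20, 40]])
print(len(set(labels.tolist())))
```

Pre-grouping users this way shrinks the search space of the downstream mode-selection problem, which is the complexity reduction the abstract refers to.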
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 1076-1090, 2024. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10513425
Citations: 0
Dynamic and Super-Personalized Media Ecosystem Driven by Generative AI: Unpredictable Plays Never Repeating the Same
IF 3.2 | CAS Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-22 | DOI: 10.1109/TBC.2024.3380474
Sungjun Ahn;Hyun-Jeong Yim;Youngwan Lee;Sung-Ik Park
This paper introduces a media service model that exploits artificial intelligence (AI) video generators at the receiving end. The proposal deviates from the traditional multimedia ecosystem, which relies entirely on in-house production, by shifting part of the content creation onto the receiver. We bring a semantic process into the framework, allowing the distribution network to provide service elements that prompt the content generator rather than distributing encoded data of fully finished programs. The service elements include fine-tailored text descriptions, lightweight image data of some objects, or application programming interfaces, comprehensively referred to as semantic sources, and the user terminal translates the received semantic data into video frames. Empowered by the random nature of generative AI, users can experience super-personalized services. The proposed idea covers situations in which the user receives element packages from different service providers, either in a sequence over time or as multiple packages at the same time. Given promised in-context coherence and content integrity, the combinatory dynamics amplify service diversity, allowing users to always chance upon new experiences. This work is particularly aimed at short-form videos and advertisements, where users easily tire of seeing the same frame sequence every time. In those use cases, the content provider's role is recast as scripting semantic sources rather than producing complete content. Overall, this work explores a new form of media ecosystem facilitated by receiver-embedded generative models, featuring both random content dynamics and enhanced delivery efficiency.
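To make the "semantic source" notion concrete, here is an illustrative data model. All names (`SemanticSource`, `render`, and their fields) are assumptions invented for illustration, not an interface from the paper; the point is only that the network ships prompts and lightweight assets while the receiver's generator turns them into ever-different frame sequences.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticSource:
    """Hypothetical service-element bundle sent instead of encoded video."""
    text_prompt: str                                    # fine-tailored description
    object_images: list = field(default_factory=list)   # lightweight image assets
    api_refs: list = field(default_factory=list)        # generator API handles

def render(source: SemanticSource, seed: int) -> str:
    """Stand-in for the receiver-side AI generator: different seeds yield
    different plays of the same semantic content."""
    return f"frames(prompt={source.text_prompt!r}, seed={seed})"

pkg = SemanticSource("a rainy neon street, 5-second ad spot")
# Two playbacks of the same package need not repeat the same frames.
print(render(pkg, seed=1) != render(pkg, seed=2))
```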
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 980-994, 2024.
Citations: 0
Self-Supervised Pretraining for Stereoscopic Image Super-Resolution With Parallax-Aware Masking
IF 4.5 | CAS Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-22 | DOI: 10.1109/TBC.2024.3382960
Zhe Zhang;Jianjun Lei;Bo Peng;Jie Zhu;Qingming Huang
Most existing learning-based methods for stereoscopic image super-resolution rely on a large number of high-resolution stereoscopic images as labels. To alleviate this data dependency, this paper proposes a self-supervised pretraining-based method for stereoscopic image super-resolution (SelfSSR). Specifically, to develop a self-supervised pretext task for stereoscopic images, a parallax-aware masking strategy (PAMS) is designed to adaptively mask matching areas of the left and right views. With PAMS, the network is encouraged to effectively predict the missing information of input images. In addition, a cross-view Transformer module (CVTM) is presented to aggregate intra-view and inter-view information simultaneously for stereoscopic image reconstruction. Meanwhile, the cross-attention map learned by CVTM is utilized to guide the masking process in PAMS. Comparative results on four datasets show that the proposed SelfSSR achieves state-of-the-art performance using only 10% of the labeled training data.
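The masking idea can be sketched in a few lines. The toy below assumes a single known integer disparity shared by all patches, which is a gross simplification: in the paper, the matching areas are guided by a learned cross-attention map, not a fixed shift.

```python
import numpy as np

rng = np.random.default_rng(2)

H, W = 8, 16        # grid of patch tokens per view
disparity = 3       # hypothetical horizontal shift between the two views

# Randomly mask 25% of left-view patches.
left_mask = rng.random((H, W)) < 0.25
right_mask = np.zeros_like(left_mask)

# Parallax-aware step: mask the matching area in the other view, i.e. the
# same rows shifted horizontally by the disparity.
cols = np.arange(W)
shifted = cols - disparity
valid = shifted >= 0
right_mask[:, shifted[valid]] = left_mask[:, cols[valid]]

# Matching patches are now hidden in both views, so the network cannot copy
# the answer from the other view and must predict the missing content.
print(int(left_mask.sum()), int(right_mask.sum()))
```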
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 482-491, 2024.
Citations: 0
Low-Latency VR Video Processing-Transmitting System Based on Edge Computing
IF 3.2 | CAS Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-11 | DOI: 10.1109/TBC.2024.3380455
Nianzhen Gao;Jiaxi Zhou;Guoan Wan;Xinhai Hua;Ting Bi;Tao Jiang
The widespread use of live streaming imposes low-latency requirements on the processing and transmission of virtual reality (VR) videos. This paper introduces a prototype system for low-latency VR video processing and transmission that exploits edge computing to harness the computational power of edge servers. This approach enables efficient video preprocessing and facilitates closer-to-user multicast video distribution. Despite edge computing's potential, managing large-scale access, addressing differentiated channel conditions, and accommodating diverse user viewports pose significant challenges for VR video transcoding and scheduling. To tackle these challenges, our system utilizes dual-edge servers for video transcoding and slicing, thereby markedly improving the viewing experience compared to traditional cloud-based systems. Additionally, we devise a low-complexity greedy algorithm for multi-edge, multi-user VR video offloading distribution, employing the results of bitrate decisions to inversely guide video transcoding. Simulation results reveal that our strategy improves system utility by 44.77% over existing state-of-the-art schemes that do not utilize edge servers, while reducing processing time by 58.54%.
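The flavor of a low-complexity greedy assignment can be shown in a few lines. This is a deliberately simplified model invented for illustration (single edge server, fixed bitrate tiers, best-channel-first order); the paper's algorithm jointly handles multiple edges and feeds the bitrate decisions back into transcoding.

```python
# Hypothetical bitrate ladder (Mbps) for the toy model.
BITRATES = [2.0, 4.0, 8.0]

def greedy_assign(channel_caps, capacity):
    """Greedily give each user the highest bitrate tier that fits both the
    user's channel ceiling and the edge server's remaining capacity."""
    remaining = capacity
    result = {}
    # Serve users with the best channels first; in this toy model they turn
    # bandwidth into quality most efficiently.
    for i in sorted(range(len(channel_caps)), key=lambda i: -channel_caps[i]):
        pick = 0.0
        for b in BITRATES:
            if b <= channel_caps[i] and b <= remaining:
                pick = b  # keep the largest feasible tier
        remaining -= pick
        result[i] = pick
    return result

print(greedy_assign([10.0, 5.0, 3.0], capacity=12.0))  # {0: 8.0, 1: 4.0, 2: 0.0}
```

A single pass over users sorted once gives O(n log n) complexity, which is the kind of low-complexity behavior the abstract emphasizes.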
IEEE Transactions on Broadcasting, vol. 70, no. 3, pp. 862-871, 2024.
Citations: 0
A Database and Model for the Visual Quality Assessment of Super-Resolution Videos
IF 4.5 | CAS Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-11 | DOI: 10.1109/TBC.2024.3382949
Fei Zhou;Wei Sheng;Zitao Lu;Guoping Qiu
Video super-resolution (SR) has important real-world applications, such as enhancing the viewing experience of legacy low-resolution videos on high-resolution display devices. However, there are no visual quality assessment (VQA) models specifically designed for evaluating SR videos, even though such models are crucially important both for advancing video SR algorithms and for viewing quality assurance. This paper addresses this gap. We start by contributing the first video super-resolution quality assessment database (VSR-QAD), which contains 2,260 SR videos annotated with mean opinion score (MOS) labels collected through an approximately 400 man-hour psychovisual experiment involving a total of 190 subjects. We then build on the new VSR-QAD to develop the first VQA model specifically designed for evaluating SR videos. The model features a two-stream convolutional neural network architecture and a two-stage training algorithm designed to extract spatial and temporal features characterizing the quality of SR videos. We present experimental results and data analysis that demonstrate the high data quality of VSR-QAD and the effectiveness of the new VQA model for measuring the visual quality of SR videos. The new database and the code of the proposed model will be available online at https://github.com/key1cdc/VSRQAD.
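As a loose analogy to the two-stream design, the toy function below scores a video from a "spatial" statistic computed on single frames and a "temporal" statistic computed on frame differences, fused into one number. These hand-crafted statistics are placeholders invented here; the paper learns both streams with CNNs and trains them in two stages.

```python
import numpy as np

def two_stream_score(frames):
    """Toy fusion of a per-frame (spatial) and a frame-difference (temporal)
    statistic; a crude stand-in for learned two-stream features."""
    frames = np.asarray(frames, dtype=float)
    spatial = frames.std(axis=(1, 2)).mean()           # per-frame detail proxy
    temporal = np.abs(np.diff(frames, axis=0)).mean()  # temporal-change proxy
    return 0.5 * spatial + 0.5 * temporal              # naive late fusion

rng = np.random.default_rng(3)
video = rng.random((8, 32, 32))  # 8 frames of 32x32 "pixels"
print(round(float(two_stream_score(video)), 3))
```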
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 516-532, 2024.
Citations: 0
Stable Viewport-Based Unsupervised Compressed 360° Video Quality Enhancement
IF 4.5 | CAS Tier 1, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-10 | DOI: 10.1109/TBC.2024.3380435
Zizhuang Zou;Mao Ye;Xue Li;Luping Ji;Ce Zhu
With the popularity of panoramic cameras and head-mounted displays, many 360° videos have been recorded. Due to the geometric distortion and boundary discontinuity of the 2D projection of 360° video, traditional 2D lossy video compression technology tends to generate more artifacts, so it is necessary to enhance the quality of compressed 360° video. However, the characteristics of 360° video prevent traditional 2D enhancement models from working properly. Previous work therefore tries to obtain a viewport sequence with smaller geometric distortions for enhancement, but such a sequence is difficult to obtain, and the trained enhancement model cannot be well adapted to a new dataset. To address these issues, we propose a Stable viewport-based Unsupervised compressed 360° video Quality Enhancement (SUQE) method. Our method consists of two stages. In the first stage, a new data preparation module adopts saliency-based data augmentation and viewport cropping techniques to generate the training dataset, and a standard 2D enhancement model is trained on this dataset. To transfer the trained enhancement model to the target dataset, a shift prediction module is designed, which crops a shifted viewport clip as the supervision signal for model adaptation. In the second stage, by comparing the differences between the current enhanced original and shifted frames, the Mean Teacher framework is employed to further fine-tune the enhancement model. Experiment results confirm that our method achieves satisfactory performance on the public dataset. The relevant models and code will be released.
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 607-619.
Citations: 0
Depth Video Inter Coding Based on Deep Frame Generation 基于深度帧生成的深度视频交互编码
IF 4.5 CAS Tier 1, Computer Science Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-04-01 DOI: 10.1109/TBC.2024.3374103
Ge Li;Jianjun Lei;Zhaoqing Pan;Bo Peng;Nam Ling
Because depth video contains large regions of similar smooth content, depth frames can be selectively generated at the decoder side instead of being encoded and transmitted at the encoder side, yielding a significant improvement in coding efficiency. This paper proposes a deep frame generation-based depth video inter coding method to efficiently compress depth video. To reduce the temporal redundancy of depth video, the proposed method encodes depth key frames and directly generates the reconstructions of depth non-key frames. Moreover, a warping-based frame generation network with boundary awareness (Ba-WFGNet) is designed to generate high-quality depth non-key frames at the decoder side. In the Ba-WFGNet, the temporal correlations among depth frames are utilized to generate a coarse depth non-key frame in a warping manner. Then, considering that the boundary quality of depth video has an important impact on view synthesis, a boundary-aware refinement module is designed to further refine the coarse depth non-key frame toward high-quality boundaries. The proposed method is implemented in MIV, and experimental results verify that it achieves superior coding efficiency.
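The warping step above generates a non-key frame by sampling a reconstructed key frame according to per-pixel motion. A toy stand-in for that idea (nearest-neighbor backward warp with clamped borders; function and variable names are assumptions, and Ba-WFGNet's learned warping and boundary refinement are far richer than this):

```python
# Minimal sketch of decoder-side depth non-key-frame generation by warping a
# reconstructed key frame with per-pixel motion vectors. A toy stand-in for
# the paper's Ba-WFGNet; names and the nearest-neighbor warp are assumptions.

def warp_frame(key_frame, motion, h, w):
    """Backward warp: each output pixel samples the key frame at (y+dy, x+dx),
    clamped to the frame borders (no boundary-aware refinement here)."""
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = motion[y][x]
            sy = min(max(y + dy, 0), h - 1)
            sx = min(max(x + dx, 0), w - 1)
            out[y][x] = key_frame[sy][sx]
    return out

# Toy example: shift a 2x2 depth map one pixel to the left.
key = [[10, 20], [30, 40]]
motion = [[(0, 1), (0, 1)], [(0, 1), (0, 1)]]
gen = warp_frame(key, motion, 2, 2)  # gen == [[20, 20], [40, 40]]
```

In the paper's pipeline, a refinement module would then correct the object boundaries of such a coarse warped frame before it is used for view synthesis.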
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 708-718.
Citations: 0
Subjective and Objective Quality Assessment of Multi-Attribute Retouched Face Images 多属性修饰人脸图像的主观和客观质量评估
IF 4.5 CAS Tier 1, Computer Science Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-03-27 DOI: 10.1109/TBC.2024.3374043
Guanghui Yue;Honglv Wu;Weiqing Yan;Tianwei Zhou;Hantao Liu;Wei Zhou
Facial retouching, which aims to enhance an individual's appearance digitally, has become popular in many areas of life, such as personal entertainment and commercial advertising. However, excessive facial retouching can affect public aesthetic values and accordingly raise mental-health concerns. There is a growing need for comprehensive quality assessment of Retouched Face (RF) images. This paper advances this topic through both subjective and objective studies. Firstly, we generate 2,500 RF images by retouching 250 high-quality face images along multiple attributes (i.e., eyes, nose, mouth, and facial shape) with different photo-editing tools. We then carry out a series of subjective experiments to evaluate the quality of multi-attribute RF images from various perspectives and construct the Multi-Attribute Retouched Face Database (MARFD) with multiple labels. Secondly, considering that retouching alters facial morphology, we introduce a multi-task learning based No-Reference (NR) Image Quality Assessment (IQA) method, named MTNet. Specifically, to capture high-level semantic information associated with geometric changes, MTNet treats the estimation of each retouching attribute's alteration degree as an auxiliary task for the main task (i.e., overall quality prediction). In addition, inspired by the perceptual effects of viewing distance, MTNet utilizes a multi-scale data augmentation strategy during network training to help the network better understand the distortions. Experimental results on MARFD show that MTNet correlates well with subjective ratings and outperforms 16 state-of-the-art NR-IQA methods.
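The multi-task setup described above combines a main quality-prediction loss with auxiliary losses for the per-attribute alteration degrees (eyes, nose, mouth, facial shape). A minimal sketch of such an objective, where the squared-error losses and the auxiliary weight are illustrative assumptions rather than MTNet's actual training losses:

```python
# Minimal sketch of a multi-task objective in the spirit of MTNet: overall
# quality prediction is the main task; per-attribute retouching-degree
# estimates (eyes, nose, mouth, facial shape) are auxiliary tasks. The
# squared-error terms and aux_weight value are illustrative assumptions.

def multitask_loss(pred_quality, true_quality, pred_attrs, true_attrs,
                   aux_weight=0.25):
    """Main-task loss plus a weighted sum of auxiliary attribute losses."""
    main = (pred_quality - true_quality) ** 2
    aux = sum((p - t) ** 2 for p, t in zip(pred_attrs, true_attrs))
    return main + aux_weight * aux

# Toy example: perfect attribute predictions leave only the main-task error.
loss = multitask_loss(3.5, 4.0,
                      pred_attrs=[0.2, 0.0, 0.1, 0.3],
                      true_attrs=[0.2, 0.0, 0.1, 0.3])
```

The auxiliary terms act as a regularizer: forcing the shared backbone to predict which attributes were altered, and by how much, steers it toward the geometric cues that also drive overall perceived quality.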
IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 570-583.
Citations: 0