
IEEE Transactions on Circuits and Systems for Video Technology: Latest Publications

Viewport Prediction for Volumetric Video Streaming by Exploring Video Saliency and User Trajectory Information
IF 11.1 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-11 | DOI: 10.1109/TCSVT.2025.3577724
Jie Li;Zhixin Li;Zhi Liu;Peng Yuan Zhou;Richang Hong;Qiyue Li;Han Hu
Volumetric video, also referred to as hologram video, is an emerging medium that represents 3D content in extended reality. As a next-generation video technology, it is poised to become a key application in 5G and future wireless communication networks. Because each user generally views only a specific portion of the volumetric video, known as the viewport, accurate prediction of the viewport is crucial for ensuring an optimal streaming performance. Despite its significance, research in this area is still in the early stages. To this end, this paper introduces a novel approach called Saliency and Trajectory-based Viewport Prediction (STVP), which enhances the accuracy of viewport prediction in volumetric video streaming by effectively leveraging both video saliency and viewport trajectory information. In particular, we first introduce a novel sampling method, Uniform Random Sampling (URS), which efficiently preserves video features while minimizing computational complexity. Next, we propose a saliency detection technique that integrates both spatial and temporal information to identify visually static and dynamic geometric and luminance-salient regions. Finally, we fuse saliency and trajectory information to achieve more accurate viewport prediction. Extensive experimental results validate the superiority of our method over existing state-of-the-art schemes. To the best of our knowledge, this is the first comprehensive study of viewport prediction in volumetric video streaming. We also make the source code of this work publicly available.
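As a rough illustration of the fusion idea described above, the sketch below encodes the user's past viewport poses with an LSTM, projects a per-frame saliency descriptor, and regresses the next viewport pose from the concatenated features. All module names, dimensions, and the concatenation-based fusion are assumptions for illustration; this is not the authors' released STVP implementation.

```python
# Minimal sketch of saliency/trajectory fusion for viewport prediction.
# Shapes and module design are illustrative assumptions, not the STVP code.
import torch
import torch.nn as nn

class ViewportPredictor(nn.Module):
    def __init__(self, pose_dim=6, saliency_dim=128, hidden=256):
        super().__init__()
        self.traj_encoder = nn.LSTM(pose_dim, hidden, batch_first=True)  # trajectory branch
        self.sal_proj = nn.Linear(saliency_dim, hidden)                  # saliency branch
        self.head = nn.Sequential(                                       # fusion head
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, pose_dim)
        )

    def forward(self, past_poses, saliency_feat):
        # past_poses: (B, T, 6) viewport history; saliency_feat: (B, saliency_dim)
        _, (h, _) = self.traj_encoder(past_poses)
        fused = torch.cat([h[-1], self.sal_proj(saliency_feat)], dim=-1)
        return self.head(fused)  # next pose: (x, y, z, yaw, pitch, roll)

model = ViewportPredictor()
next_pose = model(torch.randn(4, 30, 6), torch.randn(4, 128))
print(next_pose.shape)  # torch.Size([4, 6])
```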
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 12, pp. 12816-12829.
Citations: 0
Synergistic Fusion Network of Microscopic Hyperspectral and RGB Images for Multi-Perspective Segmentation
IF 11.1 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-11 | DOI: 10.1109/TCSVT.2025.3578726
Lixin Zhang;Qian Wang
Accurate segmentation of diverse structures in pathological images is crucial for medical analysis. While widely used RGB images offer high spatial resolution, microscopic hyperspectral images (MHSIs) provide unique biomedical spectral signatures. Existing multi-modal segmentation methods, however, often suffer from insufficient uni-modal learning, ineffective cross-modal interaction, and nonadaptive multi-modal fusion. Therefore, we propose a novel synergistic multi-modal learning paradigm for co-registered RGB-MHSIs, instantiated within the Synergistic Fusion Network (SyFusNet) which comprises: modality-specific modules and objectives to ensure uni-modal feature extraction, the Mutual Knowledge Sharing Module (MKSM) for explicit cross-modal interaction, and the Adaptive Dual-level Co-decision Module (ADCM) for collaborative multi-modal segmentation. Alongside uni-modal learning, MKSM disentangles MHSI- and RGB-specific features into band- and position-aware guidance, respectively, sharing as cross-modal knowledge to enhance each other’s representations. To fuse multi-modal predictions, ADCM generates global attention from integrated multi-modal features to adaptively refine decision-level outputs, yielding reliable segmentation. Experiments demonstrate that SyFusNet outperforms state-of-the-art methods with statistical significance (p < 0.01), achieving relative IoU gains of 9.35%, 4.63%, and 2.47% on the public PLGC, MDC, and WBC datasets, respectively, while also exhibiting strong generalizability and diagnostic potential through practical applications in multi-class segmentation and tumor regression grading.
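The adaptive decision-level fusion can be pictured as per-pixel weights, derived from the joint features, gating each modality's logits. The sketch below is a minimal version of that idea under assumed shapes; the gating design is an illustrative stand-in, not the published ADCM.

```python
# Minimal sketch of adaptive decision-level fusion of RGB and MHSI predictions.
import torch
import torch.nn as nn

class DecisionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # One spatial weight map per modality, normalized across modalities.
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, 2, kernel_size=1), nn.Softmax(dim=1))

    def forward(self, rgb_feat, mhsi_feat, rgb_logits, mhsi_logits):
        # *_feat: (B, C, H, W) features; *_logits: (B, K, H, W) class scores
        w = self.gate(torch.cat([rgb_feat, mhsi_feat], dim=1))  # (B, 2, H, W)
        return w[:, :1] * rgb_logits + w[:, 1:] * mhsi_logits

fuse = DecisionFusion(channels=64)
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32),
           torch.randn(2, 5, 32, 32), torch.randn(2, 5, 32, 32))
print(out.shape)  # torch.Size([2, 5, 32, 32])
```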
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 12, pp. 12904-12917.
Citations: 0
Stable Attribute Group Editing for Reliable Few-Shot Image Generation
IF 11.1 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-11 | DOI: 10.1109/TCSVT.2025.3578670
Guanqi Ding;Xinzhe Han;Shuhui Wang;Xin Jin;Qingming Huang
Few-shot image generation aims to generate data of an unseen category based on only a few samples. Apart from basic content generation, a range of downstream applications, such as low-data detection and few-shot classification, stand to benefit from this task. To achieve this goal, the generated images should guarantee category retention for classification beyond the visual quality and diversity. In our preliminary work, we present an “editing-based” framework, Attribute Group Editing (AGE), for reliable few-shot image generation, which largely improves the performance compared with existing methods that require re-training a GAN with limited data. Nevertheless, AGE’s performance on downstream classification is not as satisfactory as expected. Furthermore, existing generative models suffer from similar issues. This paper focuses on addressing the issue of universal class inconsistency in all generative models. It not only improves AGE to enhance its ability to preserve class information but also conducts a comprehensive analysis of the causes of this problem in generative models from multiple perspectives, proposing potential directions for resolution. We first propose Stable Attribute Group Editing (SAGE) for more stable class-relevant image generation. SAGE corrects the inaccurate assumptions in AGE and leverages the distribution information from seen categories to accurately estimate the data distribution of unseen categories, thereby eliminating the class inconsistency issue in the generated data. We apply SAGE to both GANs and diffusion models to verify its flexibility and further achieve promising generation performance. Going one step further, we find that even though the generated images look photo-realistic and require no category-relevant editing, they are usually of limited help for downstream classification. We systematically discuss this issue from both the generation and classification perspectives, and propose to boost the downstream classification performance of SAGE by enhancing the pixel and frequency components. Extensive experiments provide valuable insights into extending image generation to wider downstream applications. Code is available at https://github.com/UniBester/SAGE
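Editing-based generation of this kind reduces to shifting a latent code along learned attribute directions before decoding it with a frozen generator. The snippet below sketches that operation; the attribute dictionary, editing coefficients, and latent dimension are placeholders, not the SAGE implementation.

```python
# Minimal sketch of attribute-group editing in a generator's latent space.
import torch

def edit_latent(z, attribute_dirs, coeffs):
    """Shift latents along a weighted sum of (category-irrelevant) attribute directions.

    z: (B, D) latent codes, attribute_dirs: (A, D) dictionary, coeffs: (B, A) strengths.
    """
    return z + coeffs @ attribute_dirs

z = torch.randn(4, 512)                  # latents of unseen-category samples
dirs = torch.randn(16, 512)              # learned attribute directions (placeholder)
coeffs = 0.1 * torch.randn(4, 16)        # editing strengths (placeholder prior)
z_edited = edit_latent(z, dirs, coeffs)  # feed z_edited to the frozen generator
print(z_edited.shape)                    # torch.Size([4, 512])
```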
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 12, pp. 12719-12733.
Citations: 0
Hyperspectral Object Tracking With Spectral Information Prompt
IF 11.1 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-09 | DOI: 10.1109/TCSVT.2025.3578153
Gang He;Long Gao;Langkun Chen;Yan Jiang;Weiying Xie;Yunsong Li
Hyperspectral videos contain a larger number of spectral bands, providing extensive spectral information and material identification capabilities. This advantage enables hyperspectral trackers to achieve superior performance in challenging tracking scenarios. However, the limited availability of hyperspectral training data and the inability of existing algorithms to fully exploit hyperspectral information restrict the tracking performance. To address this issue, a novel framework, Spectral Prompt-based Hyperspectral Object Tracking (SP-HST), is proposed. SP-HST leverages an RGB tracking network as the main branch for feature extraction and tracking, which accounts for more than 98% of the total parameters and remains frozen during the training procedure. Additionally, the Spectral Prompt Learning (SPL) branch, comprising multiple lightweight prompt blocks, is introduced to generate complementary spectral representations as the prompt. The prompts contain abundant spectral information from hyperspectral data, enhancing the discriminative ability of features within the main branch. Furthermore, Complementary Weight Learning (CWL) is employed to calculate the importance of spectral information from different prompts, enabling the features for hyperspectral object tracking to contain more spectral information that is absent in the features of the main branch. By utilizing the spectral information as prompt, the number of trainable parameters is less than 2% of that in the tracking network, and convergence is reached within 12 training epochs. Extensive experiments demonstrate the superiority of SP-HST, achieving new state-of-the-art tracking performance with an AUC score of 71.3% on the HOTC dataset and a DP@20P score of 96.7% on the IMEC25 dataset. The code will be released at https://github.com/lgao001/SP-HST
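Prompt-based adaptation of a frozen backbone can be sketched as a lightweight adapter whose output is added to the frozen features. The adapter structure, dimensions, and injection point below are illustrative assumptions; the paper reports that the trainable prompt blocks amount to less than 2% of the network's parameters.

```python
# Minimal sketch of a spectral prompt block added to a frozen RGB backbone block.
import torch
import torch.nn as nn

class SpectralPromptBlock(nn.Module):
    def __init__(self, bands, dim, bottleneck=16):
        super().__init__()
        # Down-project the hyperspectral cube, then up-project to the feature dim.
        self.adapter = nn.Sequential(
            nn.Conv2d(bands, bottleneck, 1), nn.GELU(), nn.Conv2d(bottleneck, dim, 1)
        )

    def forward(self, hsi):
        return self.adapter(hsi)

dim, bands = 96, 16
frozen_block = nn.Conv2d(dim, dim, 3, padding=1)   # stand-in for one backbone stage
for p in frozen_block.parameters():
    p.requires_grad = False                        # backbone stays frozen
prompt = SpectralPromptBlock(bands, dim)           # only the prompt branch is trainable

rgb_feat = torch.randn(1, dim, 64, 64)
hsi = torch.randn(1, bands, 64, 64)
out = frozen_block(rgb_feat) + prompt(hsi)         # spectral prompt enriches frozen features
print(out.shape)  # torch.Size([1, 96, 64, 64])
```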
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 12, pp. 12636-12651.
Citations: 0
IEEE Circuits and Systems Society Information
IF 8.3 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-09 | DOI: 10.1109/TCSVT.2025.3573482
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 6, pp. C3-C3. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11028137
Citations: 0
IEEE Transactions on Circuits and Systems for Video Technology Publication Information
IF 8.3 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-09 | DOI: 10.1109/TCSVT.2025.3573480
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 6, pp. C2-C2. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11028632
Citations: 0
Reversible Data Hiding in Encrypted Images With Adaptive Multi-Directional MED and Huffman Code Based on Interval-Wise Dynamic Prediction Axes
IF 11.1 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-04 | DOI: 10.1109/TCSVT.2025.3576344
Shuai Yuan;Guangyong Gao;Yimin Yu;Zhihua Xia
With the popularization of digital information, reversible data hiding in ciphertext has become a critical research focus in privacy protection in cloud storage. A reversible data hiding method for encrypted images is proposed: Reversible Data Hiding in Encrypted Images with Adaptive Multi-directional MED and Huffman Code based on Interval-Wise Dynamic Prediction Axes (RDHEI-AHIDA). Firstly, the original image is predicted by the gradient Adaptive Multi-Directional Median Edge Detector (AM-MED) to obtain the critical gradient and the position of the Interval-wise Dynamic Prediction Axes (IDP-Axes). Then, information bits are allocated at intervals on the IDP-Axes. Combining the determined position of the IDP-Axes and the critical gradient, the prediction error values of the original image are calculated and recorded. After the image is encrypted, according to the distribution of prediction error values, an adaptive Huffman code rule is established, and pixel marking, classification and auxiliary information embedding are carried out. Finally, the secret data is embedded by the bit replacement method. Compared with the state-of-the-art RDHEI methods, experimental results show that RDHEI-AHIDA not only provides a higher pure payload while ensuring security but also exhibits certain robustness.
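The AM-MED predictor builds on the classic median edge detector (MED), which predicts each pixel from its left, top, and top-left neighbors. The sketch below implements only the standard MED rule; the gradient-adaptive multi-directional extension and the interval-wise dynamic prediction axes described above are not reproduced here.

```python
# Standard median edge detector (MED): the base predictor behind AM-MED.
import numpy as np

def med_predict(img):
    """Predict each pixel from its left (a), top (b), and top-left (c) neighbors."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)          # first row/column are left unpredicted here
    h, w = img.shape
    for i in range(1, h):
        for j in range(1, w):
            a, b, c = img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]
            if c >= max(a, b):
                pred[i, j] = min(a, b)
            elif c <= min(a, b):
                pred[i, j] = max(a, b)
            else:
                pred[i, j] = a + b - c
    return pred

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
errors = img.astype(np.int32) - med_predict(img)   # prediction errors to be Huffman-coded
```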
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 11, pp. 11708-11722.
Citations: 0
BiSeR-LMA: A Bidirectional Semantic Reasoning and Large Model Enhancement Approach for Text-Video Cross-Modal Retrieval
IF 11.1 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-04 | DOI: 10.1109/TCSVT.2025.3576619
Ming Jin;Lei Zhu;Richang Hong
Video, as an information carrier, conveys a vast amount of important information. Efficient methods for retrieving videos are therefore particularly important, which drives research on text-video cross-modal retrieval technology. However, current text-video cross-modal retrieval models still face several issues. First, these models do not fully utilize the powerful reasoning and generative capabilities of large models to address the issues of missing critical objects and insufficient high-quality video-text paired training data. Second, existing retrieval models do not adequately explore the bidirectional cross-modal semantic interaction and reasoning mechanism, which hinders the ability to fully capture and learn the implicit semantic features between different modalities. To address these issues, we propose an innovative bidirectional semantic reasoning and large model data augmentation cross-modal retrieval model (BiSeR-LMA). This model first leverages the strong reasoning and generative capabilities of large models to perform semantic reasoning on the textual descriptions of videos, then generates multiple semantically rich video frames, thereby compensating for the missing critical objects in the original video and improving the quality of video-text paired training data. Second, we design a bidirectional text-video semantic reasoning module, which uses features from one modality as auxiliary information to assist the model in reasoning about the implicit semantic information of another modality. This enhances the model’s capability to establish semantic relationships and perform reasoning on implicit semantics, promoting text-video semantic alignment. Finally, we verify the effectiveness of the proposed cross-modal retrieval model on the MSR-VTT, LSMDC, and MSVD datasets.
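The bidirectional reasoning module can be pictured as two cross-attention passes in which each modality queries the other, as sketched below. The symmetric design, residual connections, and dimensions are assumptions for illustration, not the published BiSeR-LMA architecture.

```python
# Minimal sketch of bidirectional text-video cross-attention.
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.t2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v2t = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, video):
        # text: (B, Lt, D) token features; video: (B, Lv, D) frame features
        text_enh, _ = self.t2v(text, video, video)   # text queries video
        video_enh, _ = self.v2t(video, text, text)   # video queries text
        return text + text_enh, video + video_enh    # residual fusion

block = BidirectionalCrossAttention()
t, v = block(torch.randn(2, 20, 512), torch.randn(2, 12, 512))
print(t.shape, v.shape)  # torch.Size([2, 20, 512]) torch.Size([2, 12, 512])
```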
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 11, pp. 11655-11666.
Citations: 0
Diversifying Latent Flows for Safety-Critical Scenarios Generation With CARLA Simulator
IF 11.1 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-04 | DOI: 10.1109/TCSVT.2025.3576354
Dingcheng Gao;Yanjun Qin;Xiaoming Tao;Jianhua Lu
The likelihood of encountering scenarios that lead to accidents, namely safety-critical scenarios, is minimal compared to long-term safe driving environments. The generation of repeatable and scalable safety-critical scenarios is essential for the advancement of human and autonomous driving capabilities. In contrast to the high complexity and low practicality of existing scenario generation methods, in this paper we propose a real-time approach to automatically generate challenging scenarios and instantiate them in a CARLA-based simulator. First, the safety-critical scenario is decomposed into a perturbed and optimized vehicle trajectory and the remaining reusable Unreal Engine assets based on a hierarchical model. Second, a model based on a graph conditional variational autoencoder (VAE) is employed to predict future trajectories and head angles based on past information. Third, the safety-critical scene generation model is used to enhance the diversity of the scene by diversifying the latent variables over a pre-trained trajectory representation model. Finally, the trajectories of real-world vehicles are placed into the simulator by adapting them to enable the generation of safety-critical scenes in a three-dimensional environment. The results demonstrate that the proposed approach generates scenarios that are more plausible than those generated by the baselines, with a performance improvement of over 10% in collision metrics for scenario generation. This research simplifies the construction of long-tail scenarios for autonomous vehicles, which in turn facilitates the optimization of algorithms such as autonomous trajectory planning.
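Diversifying the latent flow amounts to decoding several perturbed latent codes of a pretrained trajectory model and keeping the candidates that are most safety-critical. The sketch below illustrates this with a toy decoder and a minimum-distance risk score; the decoder interface, perturbation scale, and scoring rule are assumptions, not the paper's pipeline.

```python
# Minimal sketch of latent diversification for safety-critical trajectory selection.
import torch

def diversify_latents(decoder, z, ego_traj, num_samples=8, scale=0.5):
    """Decode perturbed latents and return the riskiest candidate trajectory."""
    candidates, risks = [], []
    for _ in range(num_samples):
        traj = decoder(z + scale * torch.randn_like(z))  # (T, 2) adversary positions
        risks.append(torch.cdist(traj, ego_traj).min())  # closest approach to the ego vehicle
        candidates.append(traj)
    return candidates[int(torch.stack(risks).argmin())]  # smallest gap = most critical

decoder = lambda z: torch.cumsum(z.view(-1, 2), dim=0)   # toy stand-in for the VAE decoder
ego = torch.cumsum(torch.full((20, 2), 0.5), dim=0)      # straight-line ego trajectory
critical_traj = diversify_latents(decoder, torch.randn(40), ego)
print(critical_traj.shape)  # torch.Size([20, 2])
```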
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 11, pp. 11723-11736.
Citations: 0
Statistic-Guided Difference Enhancement Graph Transformer for Unsupervised Change Detection in PolSAR Images
IF 11.1 | CAS Tier 1 (Engineering & Technology) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-02 | DOI: 10.1109/TCSVT.2025.3575082
Dazhi Xu;Ming Li;Yan Wu;Peng Zhang;Xinyue Xin
Polarimetric synthetic aperture radar (PolSAR) image change detection (CD) aims to accurately analyze the difference and detect changes in PolSAR images. Recently, graph transformer (GT), which combines the advantages of graph convolutional network and transformer, has increasingly attracted attention in the field of remote sensing. However, the direct application of GT for PolSAR image CD with limited training samples is challenging owing to polarimetric scattering confusion and random speckle noise. Here, we propose a novel unsupervised representation learning framework for CD in PolSAR images, named statistic-guided difference enhancement GT (SDEGT). Our motivation is that polarimetric statistics can effectively guide GT to extract robust and highly discriminative features from the raw polarimetric graphs and thus accurately detect changes. The SDEGT follows the architecture based on neighborhood aggregation GT and innovatively introduces polarimetric statistics to guide feature difference enhancement, thereby capturing the structural interaction between graph nodes and aggregating the local-to-global change correlations at low computational cost. First, SDEGT innovatively introduces noise-robust polarimetric statistics to improve its noise suppression ability and learn sufficient change-aware features from the PolSAR data. Subsequently, guided by the polarimetric statistical difference, a difference enhancement module (DEM) is designed and cleverly embedded in the SDEGT to adaptively enhance the difference between changed and unchanged nodes, thus improving the discrimination of the change-aware features. Finally, symmetric cross-entropy (SCE) is employed to facilitate the robust learning of SDEGT and attenuate the detrimental effect of label noise. Visual and quantitative experimental results on five measured PolSAR datasets with different scenes and dimensions demonstrate the competitiveness of our SDEGT over other state-of-the-art methods.
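The symmetric cross-entropy objective mentioned above augments standard cross-entropy with a reverse term in which the one-hot label is treated as the prediction and log 0 is clamped to a constant. The sketch below follows the commonly used SCE formulation; the weights alpha and beta and the clamp value A are generic defaults, not necessarily the paper's settings.

```python
# Minimal sketch of the symmetric cross-entropy (SCE) loss for noise-robust learning.
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, target, alpha=0.1, beta=1.0, A=-4.0):
    ce = F.cross_entropy(logits, target)                       # standard CE
    pred = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    one_hot = F.one_hot(target, logits.size(1)).float()
    # Reverse CE: swap prediction and label, with log(0) of the one-hot clamped to A.
    log_q = torch.where(one_hot > 0, torch.zeros_like(pred), torch.full_like(pred, A))
    rce = -(pred * log_q).sum(dim=1).mean()
    return alpha * ce + beta * rce

loss = symmetric_cross_entropy(torch.randn(8, 2), torch.randint(0, 2, (8,)))
print(loss.item())
```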
Published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 11, pp. 11667-11684.
Citations: 0