
Latest publications from the Journal of Visual Communication and Image Representation

Machine learning and transformers for thyroid carcinoma diagnosis
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-11 DOI: 10.1016/j.jvcir.2025.104668
Yassine Habchi, Hamza Kheddar, Yassine Himeur, Mohamed Chahine Ghanem
Thyroid carcinoma (TC) remains a critical health challenge, where timely and accurate diagnosis is essential for improving patient outcomes. This review provides a comprehensive examination of artificial intelligence (AI) applications — including machine learning (ML), deep learning (DL), and emerging transformer-based approaches — in the detection and classification of TC. We first outline standardized evaluation metrics and analyze publicly available datasets, highlighting their limitations in diversity, annotation quality, and representativeness. Next, we survey AI-driven diagnostic frameworks across three domains: classification, segmentation, and prediction, with emphasis on ultrasound imaging, histopathology, and genomics. A comparative analysis of ML and DL approaches illustrates their respective strengths, such as interpretability in smaller datasets versus automated feature extraction in large-scale imaging tasks. Advanced methods leveraging vision transformers (ViT) and large language models (LLMs) are discussed alongside traditional models, situating them within a broader ecosystem of feature engineering, ensemble learning, and hybrid strategies. We also examine key challenges — imbalanced datasets, computational demands, model generalizability, and ethical concerns — before outlining future research directions, including explainable AI, federated and privacy-preserving learning, reinforcement learning, and integration with the Internet of Medical Things (IoMT). By bridging technical insights with clinical considerations, this review establishes a roadmap for next-generation TC diagnostics and highlights pathways toward robust, patient-centric, and ethically responsible AI deployment in oncology.
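As a concrete reminder of what the "standardized evaluation metrics" in such TC-diagnosis studies usually are, the following sketch (not taken from the review; the labels and scores are made up) computes accuracy, sensitivity, specificity, F1, and AUC with scikit-learn:

```python
# Hypothetical illustration of the standard metrics surveyed in the review;
# the labels/scores below are invented and not from any dataset in the paper.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             roc_auc_score, confusion_matrix)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # 1 = malignant nodule
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])   # model probabilities
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "accuracy":    accuracy_score(y_true, y_pred),
    "sensitivity": tp / (tp + fn),   # recall on the malignant class
    "specificity": tn / (tn + fp),
    "f1":          f1_score(y_true, y_pred),
    "auc":         roc_auc_score(y_true, y_score),
}
print(metrics)
```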
Citations: 0
Blind compressed image diffusion restoration based on content prior and dense residual connection driven transformer
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-10 DOI: 10.1016/j.jvcir.2025.104674
Shuang Yue, Zhe Chen, Fuliang Yin
JPEG blind compressed image restoration (CIR) aims to restore high-quality images from compressed low-quality images, a long-standing low-level vision problem in image processing. Existing blind CIR methods often overlook basic content details, leading to degraded restoration quality. To address this issue, this paper proposes a blind compressed image diffusion restoration model (BCDR) based on a content prior and a dense-residual-connection-driven transformer. Specifically, we first utilize the image content restoration prior (ICP), learned from low-quality and high-quality images, to refine detail features. Then, a diffusion model estimator is used to reconstruct image texture and enhance visual coherence. Finally, the dense residual connection is applied to capture global information and generate more realistic image details. The proposed model greatly improves the quality of blind compressed images and performs well in restoring image content details. Experimental results demonstrate that the proposed method exhibits excellent performance on both the benchmark dataset and the blind CIR task in real-world scenarios.
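To make the "dense residual connection" ingredient more tangible, here is a minimal PyTorch sketch of a DenseNet-style residual block; it is an assumption about the general pattern, not the authors' BCDR code, and all channel counts are illustrative:

```python
# Minimal sketch of a dense-residual block: every conv sees the concatenation
# of all earlier feature maps, and a final 1x1 conv projects back so the block
# output can be added to its input (residual). Channel counts are illustrative.
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    def __init__(self, channels: int = 64, growth: int = 32, layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True)))
            in_ch += growth
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)  # project back to `channels`

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))   # dense connectivity
        return x + self.fuse(torch.cat(feats, dim=1))      # residual connection

block = DenseResidualBlock()
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```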
Citations: 0
DCMAE: A dual-branch contrastive masked autoencoder for 3D object detection
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-05 DOI: 10.1016/j.jvcir.2025.104675
Xun Liu, Fei Wang, Zifeng Chen, Xingzhen Dong
Learning robust local features from point clouds is crucial for 3D object detection; however, this task remains challenging due to the irregular structure, data sparsity, and lack of explicit topology in 3D scenes. Existing methods tend to focus either on local geometric or global semantic feature learning, making the joint optimization of both challenging. To address these problems, we propose a novel dual-branch local and global feature learning network for 3D object detection, namely DCMAE. It consists of a reconstruction branch and a contrastive branch. The reconstruction branch employs an online encoder to reconstruct the occluded region from visible points, focusing on capturing local geometric features. The contrastive branch uses a momentum encoder and patch-level contrastive learning to enhance the discriminability of local features by aligning them with the global context. In addition, we propose a Self- and Cross-Attention (SCA) decoder to alleviate masked-shape leakage from point coordinates by separating masked-token interactions while preserving semantic relationships with visible points. Experiments on SUN RGB-D and KITTI datasets demonstrate that DCMAE improves mAP by 1.3% and 2.6%, respectively. Compared with PiMAE, DCMAE achieves additional gains of 0.7% on SUN RGB-D and 1.1% on KITTI, showing superior performance in both indoor and outdoor scenarios.
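Two ingredients named in the abstract, the momentum encoder and patch-level contrastive learning, can be sketched as follows. This is a generic illustration under assumed shapes and temperature, not DCMAE's implementation:

```python
# Sketch of an EMA (momentum) encoder update and a patch-level InfoNCE loss.
# Shapes and the temperature are assumptions, not values from the paper.
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(online, momentum, m: float = 0.999):
    """EMA update: the momentum-encoder weights trail the online encoder."""
    for p_o, p_m in zip(online.parameters(), momentum.parameters()):
        p_m.data.mul_(m).add_(p_o.data, alpha=1.0 - m)

def patch_info_nce(q, k, temperature: float = 0.07):
    """q, k: (N_patches, dim) embeddings of the same patches from the two branches."""
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    logits = q @ k.t() / temperature                      # similarity of every patch pair
    targets = torch.arange(q.size(0), device=q.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

q = torch.randn(128, 256)
k = torch.randn(128, 256)
print(patch_info_nce(q, k).item())
```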
Citations: 0
Structure-aware filter using self-guided information
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-05 DOI: 10.1016/j.jvcir.2025.104677
Mukhalad Al-nasrawi, Guang Deng, Riyadh Nazar Ali Algburi
Structure-aware smoothing has proven to be a fundamental tool for a wide range of applications in computer vision and image processing. In this paper, we propose an efficient and effective filter for smoothing out texture. The proposed filter is implemented in three steps. (1) An image with a weak texture component is generated by applying a linear filter to the original image. (2) A windowed inherent variation is employed to discriminate textures from structures and is used as a structure indicator. (3) Pixels in flat and structure regions, guided by the structure indicator, are locally selected from the input image and the blurred image (from step (1)) through a local interpolation process. We also present a detailed experimental analysis of the proposed technique, including how the filter relates to the self-guided filter and the anisotropic filter. We demonstrate that the proposed technique outperforms several state-of-the-art methods in both subjective and objective evaluation.
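A rough, self-contained sketch of the three-step pipeline (linear pre-smoothing, a windowed-variation structure indicator, and indicator-guided interpolation) is given below; the window size, Gaussian sigma, and blending rule are assumptions, not the authors' settings:

```python
# Simplified sketch of the three-step idea in the abstract (not the authors'
# implementation; window size, sigma, and the blending rule are assumptions).
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def structure_aware_smooth(img, sigma=2.0, win=7, eps=1e-3):
    # Step 1: a linear filter produces a weakly textured version of the image.
    blurred = gaussian_filter(img, sigma)

    # Step 2: windowed inherent variation as a structure indicator.
    gx, gy = np.gradient(img)
    # |windowed sum of signed gradients| stays large on structure edges but
    # cancels out on texture, while the sum of magnitudes is large on both.
    inherent = np.abs(uniform_filter(gx, win)) + np.abs(uniform_filter(gy, win))
    total = uniform_filter(np.abs(gx), win) + uniform_filter(np.abs(gy), win)
    indicator = inherent / (total + eps)          # ~1 on structure, ~0 on texture

    # Step 3: per-pixel interpolation between input (structure) and blurred (flat/texture).
    return indicator * img + (1.0 - indicator) * blurred

out = structure_aware_smooth(np.random.rand(64, 64))
print(out.shape)
```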
Citations: 0
VLMAR: Maritime scene anomaly detection via retrieval-augmented vision-language models
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-04 DOI: 10.1016/j.jvcir.2025.104669
Shen Wang, Chunsheng Yang, Chengtao Cai
Maritime anomaly detection is crucial for ensuring navigational safety and marine security. However, global navigation safety is significantly challenged by the lack of a comprehensive understanding of abnormal maritime ship behaviors. Drawing inspiration from the advanced reasoning capabilities of large language models, we introduce VLMAR, a novel vision-language framework that synergizes retrieval-augmented knowledge grounding and chain-of-thought reasoning to address these challenges. Our approach consists of two key innovations: (1) the VLMAR dataset, a large-scale multimodal repository containing 80,000 automatic identification system (AIS) records, 11,500 synthetic aperture radar (SAR) images, 5750 AIS text reports, and 27,000 behavioral narratives; (2) the VLMAR model architecture, which links real-time sensor data with maritime knowledge through dynamic retrieval and uses chain-of-thought fusion to interpret complex behaviors. Experimental results show that VLMAR achieves 94.77% Rank-1 accuracy in AIS retrieval and 89.10% accuracy in anomaly detection, significantly outperforming existing VLMs. Beyond performance, VLMAR reveals that aligning spatiotemporal AIS data with SAR imagery enables interpretable detection of hidden anomalies such as AIS spoofing and unauthorized route deviations, offering reliable explanations for safety-critical maritime decisions. This research establishes a new benchmark for maritime artificial intelligence systems, demonstrating how hybrid retrieval-generation paradigms can enhance situational awareness and support human-aligned decision-making.
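The retrieval-augmented prompting pattern the abstract describes can be illustrated with a toy sketch; the embedding function, knowledge snippets, and prompt wording below are all hypothetical placeholders rather than VLMAR components:

```python
# Toy sketch of retrieval-augmented prompting for maritime anomaly reasoning.
# The embedding, knowledge snippets, and prompt are invented for illustration;
# the real VLMAR pipeline and data are not reproduced here.
import numpy as np

knowledge = [
    "Vessels broadcasting AIS positions inconsistent with SAR detections may be spoofing.",
    "Sharp deviations from a declared route without weather cause are suspicious.",
    "AIS gaps longer than several hours near restricted zones warrant review.",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: hash-seeded random vector (placeholder for a real encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve(query: str, k: int = 2):
    q = embed(query)
    scores = [float(q @ embed(doc)) for doc in knowledge]
    top = np.argsort(scores)[::-1][:k]          # highest cosine similarity first
    return [knowledge[i] for i in top]

ais_summary = "Track 4711: reported speed 2 kn, SAR shows vessel 18 km from AIS position."
context = "\n".join(retrieve(ais_summary))
prompt = (f"Maritime knowledge:\n{context}\n\n"
          f"Observation:\n{ais_summary}\n\n"
          "Reason step by step, then answer: is this behavior anomalous?")
print(prompt)
```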
Citations: 0
A two-stage tone mapping network based on attention mechanism for high dynamic range images
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-03 DOI: 10.1016/j.jvcir.2025.104672
Mingtao Zhu, Hengyong Yu, Ying Chu
High dynamic range (HDR) imaging enhances visual realism by capturing a wide luminance range, but displaying HDR images on devices with limited dynamic range requires effective tone mapping operators (TMOs). We propose a novel two-stage tone mapping network (TSTMNet) based on attention mechanisms to address this challenge. The first stage utilizes adaptive luminance modulation blocks based on channel attention to dynamically adjust global luminance, achieving adaptive luminance modification. The second stage integrates local enhancement transformer blocks, leveraging self-attention from Transformers to enhance local details. This combination allows the TSTMNet to utilize the strengths of both CNNs and Transformers, overcoming their individual limitations in modeling long-range dependencies and preserving local details. Extensive experiments demonstrate that TSTMNet achieves competitive performance among recent methods in both quantitative metrics and qualitative visual quality, achieving superior dynamic range compression and detail preservation. Our method offers a robust and efficient solution to the tone mapping problem. The code is available to be downloaded at https://github.com/mtlaa/TSTMNet.
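The channel-attention mechanism behind the first stage's adaptive luminance modulation resembles a squeeze-and-excitation block; a minimal PyTorch sketch is shown below, with the channel count and reduction ratio chosen for illustration rather than taken from TSTMNet:

```python
# Minimal sketch of a channel-attention ("squeeze-and-excitation"-style) block
# of the general kind used for adaptive luminance modulation. Channel count and
# reduction ratio are assumptions, not the paper's values.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # global per-channel statistics
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # per-channel rescaling

x = torch.randn(2, 64, 128, 128)
print(ChannelAttention()(x).shape)                     # torch.Size([2, 64, 128, 128])
```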
Citations: 0
Deep color constancy via a color shift aware conditional diffusion model
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-01 DOI: 10.1016/j.jvcir.2025.104658
Haonan Su, Chenyu Wang, Haiyan Jin, Yuanlin Zhang, Bin Wang, Zhiyu Jiang
Color constancy methods often struggle to generalize across different lighting conditions due to the complexity of lighting scenarios. Inspired by the impressive performance of diffusion models in image synthesis and restoration tasks, we propose a color shift aware conditional diffusion model operating in LCH color space. This model leverages the generative capabilities of diffusion models to embed color channels for color reconstruction, ensuring robust performance and high accuracy in color restoration. Additionally, to achieve more accurate color reconstruction during the model’s generative process, we further designed a color shift estimation module. This module effectively guides the color generation of the conditional diffusion model by capturing global deviations in the color channels. Through extensive experiments, it is shown that the proposed method achieves the best performance in the accuracy of color reproduction compared to other alternatives for single- and multi-illuminant scenes, exhibiting efficient generalization capabilities.
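To ground the notion of a "global color shift" in LCH space, the following toy sketch decomposes an RGB image into lightness, chroma, and hue and measures a global chroma/hue offset against a reference; it does not reproduce the paper's conditional diffusion model:

```python
# Illustrative sketch only: decompose RGB into LCH (lightness, chroma, hue) and
# measure a global hue/chroma offset against a reference image. This is a toy
# for the "global color shift" notion; the diffusion model is not shown.
import numpy as np
from skimage import color

def rgb_to_lch(rgb: np.ndarray) -> np.ndarray:
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    C = np.hypot(a, b)                      # chroma
    H = np.arctan2(b, a)                    # hue angle in radians
    return np.stack([L, C, H], axis=-1)

def global_shift(img: np.ndarray, ref: np.ndarray):
    """Mean chroma and circular-mean hue difference between image and reference."""
    lch_i, lch_r = rgb_to_lch(img), rgb_to_lch(ref)
    d_c = float(lch_i[..., 1].mean() - lch_r[..., 1].mean())
    d_h = np.angle(np.exp(1j * lch_i[..., 2]).mean() / np.exp(1j * lch_r[..., 2]).mean())
    return d_c, float(d_h)

img = np.random.rand(32, 32, 3)
ref = np.clip(img * np.array([1.1, 1.0, 0.9]), 0, 1)   # simulated color cast
print(global_shift(img, ref))
```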
Citations: 0
Detecting human-object interactions with image category-guided and query denoising
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-12-01 DOI: 10.1016/j.jvcir.2025.104660
Jing Han, Hongyu Li, Xiaoying Wang, Xueqiang Lyu, Zangtai Cai, Yuzhong Chen
Existing Detection Transformer (DETR)-based algorithms in Human-Object Interaction (HOI) schemes learn instance-level human-object pair features to infer interaction behaviors, ignoring the influence of image macrostructure on the interaction behaviors. Meanwhile, the instability of the Hungarian matching process affects model convergence. Spurred by these concerns, this paper presents a novel HOI detection method featuring image category guidance and enhanced by query denoising. The proposed method constructs an image-level category query, which enhances the instance-level query based on the image-level contextual features to infer the interactions between humans and objects. Additionally, we introduce a query denoising training mechanism. Controlled noise is added to ground-truth queries, and the model is trained to reconstruct the original targets. This approach stabilizes matching and accelerates convergence. Furthermore, a branching shortcut is added to the triplet Hungarian matching process to stabilize the model’s training process. Experiments on the HICO-DET and V-COCO datasets demonstrate the superior performance of our method. Our method achieves accuracies of 37.71% on HICO-DET and 67.1% on V-COCO, while reducing training rounds from 500 to 25. The 95% reduction in training time results in significantly lower computational costs and energy consumption, enhancing the feasibility of practical deployment and accelerating experimental cycles. The code is available at https://github.com/lihy000/CADN-HOTR.
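The query-denoising mechanism, adding controlled noise to ground-truth queries so the decoder learns to reconstruct the originals, can be sketched for the box component as follows; the noise scale and group count are illustrative, not the paper's values:

```python
# Sketch of the query-denoising idea: ground-truth boxes (cx, cy, w, h, all
# normalized) are jittered to create noised queries that the decoder must map
# back to the originals. Noise scale and group count are illustrative.
import torch

def make_denoising_queries(gt_boxes: torch.Tensor,
                           box_noise: float = 0.4,
                           groups: int = 2) -> torch.Tensor:
    """gt_boxes: (N, 4). Returns (groups * N, 4) noised copies, clamped to [0, 1]."""
    noised = gt_boxes.repeat(groups, 1)
    wh = noised[:, 2:].repeat(1, 2)                        # scale noise by box size
    jitter = (torch.rand_like(noised) * 2 - 1) * box_noise * wh
    return (noised + jitter).clamp(0.0, 1.0)

gt = torch.tensor([[0.50, 0.50, 0.20, 0.30],
                   [0.25, 0.40, 0.10, 0.15]])
queries = make_denoising_queries(gt)       # training would regress these back to gt
print(queries.shape)                        # torch.Size([4, 4])
```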
Citations: 0
Accelerating inter-frame prediction in Versatile Video Coding via deep learning-based mode selection
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-11-29 DOI: 10.1016/j.jvcir.2025.104653
Xudong Zhang, Jing Chen, Huanqiang Zeng, Wenjie Xiang, Yuting Zuo
Compared to its predecessor HEVC, VVC utilizes the Quad-Tree plus Multi-type Tree (QTMT) structure for partitioning Coding Units (CUs) and integrates a wider range of inter-frame prediction modes within its inter-frame coding framework. The incorporation of these innovative techniques enables VVC to achieve a substantial bitrate reduction of approximately 40% compared to HEVC. However, this efficiency boost is accompanied by a more than tenfold increase in encoding time. To accelerate the inter-frame prediction mode selection process, an FPMSN (Fast Prediction Mode Selection Network)-based method focusing on encoding acceleration during the non-partitioning mode testing phase is proposed in this paper. First, the execution results of the affine mode are collected as neural network input features. Next, FPMSN is proposed to extract critical information from multi-dimensional data and output the probabilities for each mode. Finally, multiple trade-off strategies are implemented to early-terminate low-probability mode candidates.
Experimental results show that, under the Random Access (RA) configuration, the proposed method achieves a reduction in encoding time ranging from 3.22% to 19.3%, with a corresponding BDBR increase of only 0.12% to 1.363%, surpassing the performance of state-of-the-art methods.
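The early-termination idea, skipping inter prediction modes to which the network assigns low probability, can be sketched as below; the mode list and threshold are placeholders, and the rate-distortion testing loop itself is not shown:

```python
# Toy sketch of early termination: a network scores each inter prediction mode,
# and low-probability candidates are skipped. Mode names and the threshold are
# illustrative; the real FPMSN features and RD-cost loop are not reproduced.
import torch

MODES = ["merge", "affine", "amvp", "geo", "mmvd"]

def select_modes(logits: torch.Tensor, keep_thresh: float = 0.10):
    """logits: (num_modes,) raw scores from the mode-selection network."""
    probs = torch.softmax(logits, dim=0)
    keep = [(m, float(p)) for m, p in zip(MODES, probs) if p >= keep_thresh]
    # Always keep at least the single most probable mode.
    if not keep:
        i = int(torch.argmax(probs))
        keep = [(MODES[i], float(probs[i]))]
    return keep

print(select_modes(torch.tensor([2.0, 1.5, 0.1, -1.0, -2.0])))
```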
Citations: 0
DAIRNet: Degradation-aware All-in-one Image Restoration Network with cross-channel feature interaction
IF 3.1 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-11-29 DOI: 10.1016/j.jvcir.2025.104659
Amit Monga, Hemkant Nehete, Tharun Kumar Reddy Bollu, Balasubramanian Raman
Image restoration is a fundamental task in computer vision that recovers clean images from degraded inputs. However, preserving fine details and maintaining global structural consistency are challenging tasks. Traditional convolutional neural network (CNN)-based methods capture local features but fail to model long-range dependencies and often overlook small objects within similar backgrounds. Transformers, conversely, model global context effectively but lack local detail precision. To overcome these limitations, this paper proposes a Degradation-aware All-in-one Image Restoration Network that integrates both CNNs and Transformers. Beginning with a multiscale feature extraction block, the network captures diverse features across different resolutions, enhancing its ability to handle complex image structures. The features from the CNN encoder are subsequently passed through a Transformer decoder. Notably, an interleaved Transformer is applied to the features extracted by the CNN encoder, fostering cross-interaction between features and helping to propagate similar texture signals across the entire feature space, making them more distinguishable. These improved features are then concatenated with the Transformer decoder blocks, with degradation-aware information as prompts, enriching the restoration process. On average, across various restoration tasks, DAIRNet surpasses the state-of-the-art PromptIR and AirNet methods by 0.76 dB and 1.62 dB, respectively. Specifically, it achieves gains of 1.74 dB in image deraining, 0.26 dB in high-noise-level denoising, and 0.84 dB in image dehazing compared to PromptIR. Single-task benchmarks further confirm the model's effectiveness and generalizability.
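A minimal sketch of a degradation-aware prompt module of the general kind the abstract describes is given below; the prompt count, spatial size, and fusion by concatenation are assumptions rather than DAIRNet's actual design:

```python
# Sketch of a degradation-aware prompt: a small head predicts weights over a
# few learnable "degradation prompts", and their weighted sum is concatenated
# to the decoder features. All sizes are illustrative, not DAIRNet's.
import torch
import torch.nn as nn

class DegradationPrompt(nn.Module):
    def __init__(self, channels: int = 64, num_prompts: int = 3):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, channels, 16, 16))
        self.weight_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, num_prompts), nn.Softmax(dim=-1))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        w = self.weight_head(feat)                             # (B, num_prompts)
        prompt = torch.einsum("bn,nchw->bchw", w, self.prompts)
        prompt = nn.functional.interpolate(prompt, size=feat.shape[-2:],
                                           mode="bilinear", align_corners=False)
        return torch.cat([feat, prompt], dim=1)                # (B, 2*channels, H, W)

f = torch.randn(2, 64, 32, 32)
print(DegradationPrompt()(f).shape)   # torch.Size([2, 128, 32, 32])
```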
Citations: 0