IEEE Transactions on Pattern Analysis and Machine Intelligence: Latest Publications

Stimulating Diffusion Model for Image Denoising via Adaptive Embedding and Ensembling.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432812
Tong Li, Hansen Feng, Lizhi Wang, Lin Zhu, Zhiwei Xiong, Hua Huang

Image denoising is a fundamental problem in computational photography, where achieving high perceptual quality with low distortion is highly demanding. Current methods either struggle with perceptual quality or suffer from significant distortion. Recently, the emerging diffusion model has achieved state-of-the-art performance in various tasks and demonstrates great potential for image denoising. However, stimulating diffusion models for image denoising is not straightforward and requires solving several critical problems. For one thing, the input inconsistency hinders the connection between diffusion models and image denoising. For another, the content inconsistency between the generated image and the desired denoised image introduces distortion. To tackle these problems, we present a novel strategy called the Diffusion Model for Image Denoising (DMID) by understanding and rethinking the diffusion model from a denoising perspective. Our DMID strategy includes an adaptive embedding method that embeds the noisy image into a pre-trained unconditional diffusion model and an adaptive ensembling method that reduces distortion in the denoised image. Our DMID strategy achieves state-of-the-art performance on both distortion-based and perception-based metrics, for both Gaussian and real-world image denoising. The code is available at https://github.com/Li-Tong-621/DMID.
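
The abstract does not spell out how the embedding works; one common way to picture it is matching the observed noise level to a diffusion timestep and rescaling the noisy image so it resembles an intermediate diffusion state, then averaging several stochastic reverse trajectories as a stand-in for the ensembling step. A minimal sketch under that reading, with a hypothetical DDPM-style interface (`eps_model`, cumulative schedule `alpha_bar`), not the authors' released code:

```python
import torch

@torch.no_grad()
def embed_and_denoise(eps_model, y, sigma, alpha_bar, n_ensemble=4):
    """Denoise y (B, C, H, W, values in [-1, 1]) with known noise std sigma,
    using a pre-trained unconditional DDPM. eps_model and alpha_bar (the
    cumulative product of the noise schedule, shape (T,)) are hypothetical."""
    # Adaptive embedding: pick the timestep whose signal-to-noise ratio matches
    # the observed noise level, then rescale y so it looks like x_t.
    t_start = int(((1 - alpha_bar) / alpha_bar - sigma ** 2).abs().argmin())
    outputs = []
    for _ in range(n_ensemble):  # plain averaging stands in for adaptive ensembling
        x = alpha_bar[t_start].sqrt() * y
        for t in range(t_start, -1, -1):
            a_t = alpha_bar[t]
            a_prev = alpha_bar[t - 1] if t > 0 else torch.ones_like(a_t)
            ts = torch.full((y.shape[0],), t, device=y.device, dtype=torch.long)
            eps = eps_model(x, ts)
            x0_hat = ((x - (1 - a_t).sqrt() * eps) / a_t.sqrt()).clamp(-1, 1)
            beta_t = 1 - a_t / a_prev
            mean = (a_prev.sqrt() * beta_t * x0_hat
                    + (1 - beta_t).sqrt() * (1 - a_prev) * x) / (1 - a_t)
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            x = mean + (beta_t * (1 - a_prev) / (1 - a_t)).sqrt() * noise
        outputs.append(x)
    return torch.stack(outputs).mean(dim=0)
```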

Citations: 0
DifFace: Blind Face Restoration with Diffused Error Contraction.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432651
Zongsheng Yue, Chen Change Loy

While deep learning-based methods for blind face restoration have achieved unprecedented success, they still suffer from two major limitations. First, most of them deteriorate when facing complex degradations outside their training data. Second, these methods require multiple constraints, e.g., fidelity, perceptual, and adversarial losses, which necessitate laborious hyper-parameter tuning to stabilize and balance their influences. In this work, we propose a novel method named DifFace that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs. The key to our method is to establish a posterior distribution from the observed low-quality (LQ) image to its high-quality (HQ) counterpart. In particular, we design a transition distribution from the LQ image to the intermediate state of a pre-trained diffusion model and then gradually transition from this intermediate state to the HQ target by recursively applying the pre-trained diffusion model. The transition distribution only relies on a restoration backbone that is trained with L1 loss on some synthetic data, which favorably avoids the cumbersome training process in existing methods. Moreover, the transition distribution can contract the error of the restoration backbone and thus makes our method more robust to unknown degradations. Comprehensive experiments show that DifFace is superior to current state-of-the-art methods, especially in cases with severe degradations. Code and model are available at https://github.com/zsyOAOA/DifFace.
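
Read literally, the pipeline is: run the L1-trained restoration backbone, forward-diffuse its output to an intermediate state of a pre-trained diffusion model, then denoise back to the HQ target. A rough sketch, assuming the same kind of hypothetical DDPM interface (`eps_model`, `alpha_bar`) as in the DMID sketch above; `backbone` and `t_mid` are placeholders and the actual DifFace sampling schedule may differ:

```python
import torch

@torch.no_grad()
def difface_style_restore(backbone, eps_model, y_lq, alpha_bar, t_mid=400):
    """Coarse-to-fine restoration sketch: the backbone gives an initial HQ
    guess, which is forward-diffused to an intermediate state x_{t_mid} and
    then denoised by a pre-trained unconditional diffusion model."""
    x0_coarse = backbone(y_lq)          # restoration backbone trained with L1 loss
    a_mid = alpha_bar[t_mid]
    # Transition distribution: the injected noise helps contract the residual
    # error left by the backbone.
    x = a_mid.sqrt() * x0_coarse + (1 - a_mid).sqrt() * torch.randn_like(x0_coarse)
    for t in range(t_mid, -1, -1):      # deterministic DDIM-style reverse steps
        a_t = alpha_bar[t]
        a_prev = alpha_bar[t - 1] if t > 0 else torch.ones_like(a_t)
        ts = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = eps_model(x, ts)
        x0_hat = ((x - (1 - a_t).sqrt() * eps) / a_t.sqrt()).clamp(-1, 1)
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
    return x0_hat
```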

Citations: 0
Learning to Cut via Hierarchical Sequence/Set Model for Efficient Mixed-Integer Programming.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432716
Jie Wang, Zhihai Wang, Xijun Li, Yufei Kuang, Zhihao Shi, Fangzhou Zhu, Mingxuan Yuan, Jia Zeng, Yongdong Zhang, Feng Wu

Cutting planes (cuts) play an important role in solving mixed-integer linear programs (MILPs), which formulate many important real-world applications. Cut selection heavily depends on (P1) which cuts to prefer and (P2) how many cuts to select. Although modern MILP solvers tackle (P1)-(P2) with human-designed heuristics, machine learning carries the potential to learn more effective heuristics. However, many existing learning-based methods learn which cuts to prefer, neglecting the importance of learning how many cuts to select. Moreover, we observe that (P3) the order of the selected cuts also significantly impacts the efficiency of MILP solvers. To address these challenges, we propose a novel hierarchical sequence/set model (HEM) to learn cut selection policies. Specifically, HEM is a bi-level model: (1) a higher-level module that learns how many cuts to select, and (2) a lower-level module that formulates cut selection as a sequence/set-to-sequence learning problem and learns policies for selecting an ordered subset whose cardinality is determined by the higher-level module. To the best of our knowledge, HEM is the first data-driven methodology that effectively tackles (P1)-(P3) simultaneously. Experiments demonstrate that HEM significantly improves the efficiency of solving MILPs on eleven challenging MILP benchmarks, including two of Huawei's real problems.
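
The bi-level structure can be illustrated with a small pointer-network-style policy: a higher-level head predicts the fraction of cuts to keep, and a lower-level decoder emits an ordered subset of that size. The sketch below only mirrors the interface described in the abstract; the module choices, sizes, and training procedure are assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class HierarchicalCutSelector(nn.Module):
    """Illustrative two-level cut-selection policy (not the paper's exact HEM)."""

    def __init__(self, d_cut, d_hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(d_cut, d_hidden, batch_first=True)
        self.ratio_head = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.Tanh(),
                                        nn.Linear(d_hidden, 1), nn.Sigmoid())
        self.decoder = nn.LSTMCell(d_hidden, d_hidden)
        self.pointer = nn.Linear(d_hidden, d_hidden, bias=False)

    def forward(self, cut_feats):                    # cut_feats: (1, n_cuts, d_cut)
        enc, (h, c) = self.encoder(cut_feats)        # per-cut embeddings
        h, c = h[0], c[0]
        ratio = self.ratio_head(h)                   # higher level: fraction of cuts to keep
        k = max(1, int(ratio.item() * cut_feats.size(1)))
        chosen = []
        mask = torch.zeros(cut_feats.size(1), dtype=torch.bool, device=cut_feats.device)
        inp = enc.mean(dim=1)                        # start token: mean cut embedding
        for _ in range(k):                           # lower level: ordered subset, one cut per step
            h, c = self.decoder(inp, (h, c))
            scores = (self.pointer(enc[0]) @ h[0]).masked_fill(mask, float("-inf"))
            idx = int(scores.argmax())
            chosen.append(idx)
            mask[idx] = True
            inp = enc[:, idx]
        return chosen, ratio
```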

Citations: 0
Inductive State-Relabeling Adversarial Active Learning with Heuristic Clique Rescaling.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432099
Beichen Zhang, Liang Li, Shuhui Wang, Shaofei Cai, Zheng-Jun Zha, Qi Tian, Qingming Huang

Active learning (AL) aims to design label-efficient algorithms by labeling the most representative samples. It reduces annotation cost and attracts increasing attention from the community. However, previous AL methods suffer from the inadequacy of annotations and unreliable uncertainty estimation. Moreover, we find that they ignore the intra-diversity of selected samples, which leads to sampling redundancy. In view of these challenges, we propose an inductive state-relabeling adversarial AL model (ISRA) that consists of a unified representation generator, an inductive state-relabeling discriminator, and a heuristic clique rescaling module. The generator introduces contrastive learning to leverage unlabeled samples for self-supervised training, where mutual information is utilized to improve the representation quality for AL selection. Then, we design an inductive uncertainty indicator to learn the state score from labeled data and relabel unlabeled data with different importance for better discrimination of instructive samples. To solve the problem of sampling redundancy, the heuristic clique rescaling module measures the intra-diversity of candidate samples and recurrently rescales them to select the most informative samples. Experiments conducted on eight datasets and two imbalanced scenarios show that our model outperforms previous state-of-the-art AL methods. As an extension to the cross-modal AL task, we apply ISRA to image captioning, where it also achieves superior performance.
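
As a rough stand-in for the diversity-aware selection described above, the following greedy routine scores candidates by their state (uncertainty) score and penalizes similarity to samples already picked; the actual heuristic clique rescaling module is more elaborate, so treat this as an illustration only:

```python
import numpy as np

def diverse_select(scores, feats, budget, alpha=0.5):
    """Toy diversity-aware selection: scores (N,), feats (N, D), budget picks.
    Greedily trade off a high state score against redundancy with prior picks."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    selected = []
    for _ in range(budget):
        if selected:
            penalty = (feats @ feats[selected].T).max(axis=1)  # cosine sim to picks
        else:
            penalty = np.zeros(len(scores))
        gain = scores - alpha * penalty
        gain[selected] = -np.inf                               # never re-pick a sample
        selected.append(int(gain.argmax()))
    return selected
```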

Citations: 0
Latent Semantic and Disentangled Attention.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432631
Jen-Tzung Chien, Yu-Han Huang

Sequential learning with the transformer has achieved state-of-the-art performance in natural language tasks and many others. The key to this success is the multi-head self-attention, which encodes and gathers the features from individual tokens of an input sequence. The mapping or decoding is then performed via cross-attention to produce an output sequence. Such an attention framework has three weaknesses. First, since the attention mixes up the features of different tokens in the input and output sequences, redundant information is likely to exist in the sequence data representation. Second, the patterns of attention weights among different heads tend to be similar, which bounds the model capacity. Third, the robustness of an encoder-decoder network against model uncertainty is disregarded. To handle these weaknesses, this paper presents a Bayesian semantic and disentangled mask attention to learn latent disentanglement in multi-head attention, where the redundant features in the transformer are compensated with latent topic information. The attention weights are filtered by a mask that is optimized through semantic clustering. This attention mechanism is implemented according to Bayesian learning for clustered disentanglement. Experiments on machine translation and speech recognition show the merit of Bayesian clustered disentanglement for mask attention.
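
A stripped-down view of the mask-attention idea is a learned gate over the keys that filters the attention weights before renormalization; the Bayesian semantic clustering that optimizes the mask in the paper is omitted, and all module names below are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedAttention(nn.Module):
    """Multi-head self-attention whose weights are filtered by a learned gate."""

    def __init__(self, d_model, n_heads):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.mask_net = nn.Linear(d_model, n_heads)   # one mask logit per head and key
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                             # x: (B, T, d_model)
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)  # (B, H, T, T)
        gate = torch.sigmoid(self.mask_net(x)).permute(0, 2, 1).unsqueeze(2)    # (B, H, 1, T)
        attn = attn * gate                                                       # filter the weights
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)             # renormalize
        y = (attn @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out(y)
```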

Citations: 0
HDF-Net: Capturing Homogeny Difference Features to Localize the Tampered Image.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432551
Ruidong Han, Xiaofeng Wang, Ningning Bai, Yihang Wang, Jianpeng Hou, Jianru Xue

Modern image editing software enables anyone to alter the content of an image to deceive the public, which can pose a security hazard to personal privacy and public safety. The detection and localization of image tampering are therefore becoming an urgent issue to address. We have revealed that the tampered region exhibits homogenous differences (the changes in metadata organization form and organization structure of the image) from the real region after manipulations such as splicing, copy-move, and removal. Therefore, we propose a novel end-to-end network named HDF-Net to extract these homogeny difference features for precise localization of tampering artifacts. The HDF-Net is composed of RGB and SRM dual-stream networks, including three complementary modules, namely the suspicious tampering-artifact prominent (STP) module, the fine tampering-artifact salient (FTS) module, and the tampering-artifact edge refined (TER) module. We utilize the fully attentional block (FLA) to enhance the characterization ability of the homogeny difference features extracted by each module and preserve the specifics of tampering artifacts. These modules are gradually merged according to a "coarse-fine-finer" strategy, which significantly improves the localization accuracy and edge refinement. Extensive experiments demonstrate that HDF-Net performs better than state-of-the-art tampering localization models on five benchmarks, achieving satisfactory generalization and robustness. Code can be found at https://github.com/ruidonghan/HDF-Net/.
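
The dual-stream design can be pictured as an RGB convolution stream plus a noise-residual stream; in the sketch below a simple fixed high-pass kernel stands in for the SRM filters, and none of the paper's STP/FTS/TER modules or the FLA block are reproduced:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamStem(nn.Module):
    """Illustrative dual-stream front end for tamper localization: an RGB
    stream plus a high-pass (noise-residual) stream, fused by a 1x1 conv.
    The high-pass kernel is a placeholder, not the actual SRM filter bank."""

    def __init__(self, out_ch=32):
        super().__init__()
        self.rgb_conv = nn.Conv2d(3, out_ch, 3, padding=1)
        hp = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]]) / 4.0
        self.register_buffer("hp_kernel", hp.view(1, 1, 3, 3).repeat(3, 1, 1, 1))
        self.noise_conv = nn.Conv2d(3, out_ch, 3, padding=1)
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)

    def forward(self, x):                               # x: (B, 3, H, W)
        rgb_feat = self.rgb_conv(x)
        residual = F.conv2d(x, self.hp_kernel, padding=1, groups=3)  # per-channel high-pass
        noise_feat = self.noise_conv(residual)
        return self.fuse(torch.cat([rgb_feat, noise_feat], dim=1))
```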

Citations: 0
Rethinking Self-Supervised Semantic Segmentation: Achieving End-to-End Segmentation.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432326
Yue Liu, Jun Zeng, Xingzhen Tao, Gang Fang

The challenge of semantic segmentation with scarce pixel-level annotations has induced many self-supervised works; however, most of them essentially train an image encoder or a segmentation head that produces finer dense representations, and when performing segmentation inference they need to resort to supervised linear classifiers or traditional clustering. Segmentation by dataset-level clustering not only deviates from real-time, end-to-end inference practice, but also escalates the problem from segmenting each image to clustering all pixels at once, which degrades performance. To remedy this issue, we propose a novel self-supervised semantic segmentation training and inference paradigm in which inference is performed in an end-to-end manner. Specifically, based on our observations when probing the dense representations of an image-level self-supervised ViT, i.e., semantic inconsistency between patches and poor semantic quality in non-salient regions, we propose prototype-image alignment and global-local alignment with an attention-map constraint to train a tailored Transformer Decoder with learnable prototypes, and utilize adaptive prototypes for segmentation inference per image. Extensive experiments under fully unsupervised semantic segmentation settings demonstrate the superior performance and generalizability of our proposed method. The code is available at: https://github.com/yliu1229/AlignSeg.
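
End-to-end inference with learnable prototypes can be pictured as assigning each patch token to its nearest prototype and upsampling the assignment to pixel labels; the sketch below assumes generic shapes and a ViT-style token grid rather than the authors' tailored Transformer Decoder:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def prototype_segment(patch_feats, prototypes, patch_hw, image_hw):
    """Per-image inference sketch: patch_feats (N, D) from an encoder/decoder,
    prototypes (K, D) learnable class prototypes, patch_hw = (h, w) with
    h * w == N, image_hw = output resolution."""
    feats = F.normalize(patch_feats, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    sim = feats @ protos.T                      # (N, K) cosine similarity
    labels = sim.argmax(dim=-1)                 # hard assignment per patch
    label_map = labels.view(1, 1, *patch_hw).float()
    # Nearest-neighbour upsampling keeps the discrete class ids intact.
    return F.interpolate(label_map, size=image_hw, mode="nearest").long()[0, 0]
```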

Citations: 0
FEditNet++: Few-Shot Editing of Latent Semantics in GAN Spaces with Correlated Attribute Disentanglement.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432529
Ran Yi, Teng Hu, Mengfei Xia, Yizhe Tang, Yong-Jin Liu

Generative Adversarial Networks have achieved significant advancements in generating and editing high-resolution images. However, most methods either require extensive labeled datasets or rely on strong prior knowledge. It is also challenging for them to disentangle correlated attributes with few-shot data. In this paper, we propose FEditNet++, a GAN-based approach to explore latent semantics. It aims to enable attribute editing with limited labeled data and to disentangle the correlated attributes. We propose a layer-wise feature contrastive objective, which takes content consistency into consideration and facilitates the invariance of the unrelated attributes before and after editing. Furthermore, we harness the knowledge from the pretrained discriminative model to prevent overfitting. In particular, to solve the entanglement problem between the correlated attributes from data and semantic latent correlation, we extend our model to jointly optimize multiple attributes and propose a novel decoupling loss and cross-assessment loss to disentangle them in both the latent and image spaces. We further propose a novel-attribute disentanglement strategy to enable editing of novel attributes with unknown entanglements. Finally, we extend our model to accurately edit fine-grained attributes. Qualitative and quantitative assessments demonstrate that our method outperforms state-of-the-art approaches across various datasets, including CelebA-HQ, RaFD, Danbooru2018 and LSUN Church.
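
At inference time, attribute editing in a GAN latent space typically amounts to moving the latent code along learned directions; the snippet below shows only that final step, with hypothetical shapes for a StyleGAN-like w space, and none of FEditNet++'s few-shot training, contrastive objective, or decoupling losses:

```python
import torch

def edit_latent(w, directions, strengths):
    """Move latent code w along learned, (ideally) disentangled attribute
    directions; both the directions and the strengths are illustrative."""
    w_edit = w.clone()
    for d, s in zip(directions, strengths):
        d = d / d.norm()                 # unit-norm attribute direction
        w_edit = w_edit + s * d          # positive/negative s strengthens/weakens the attribute
    return w_edit

# Hypothetical usage with a StyleGAN-like generator:
# w = torch.randn(1, 512); smile = torch.randn(512); age = torch.randn(512)
# w_new = edit_latent(w, [smile, age], [2.0, -1.5]); img = generator(w_new)
```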

Citations: 0
Towards a Flexible Semantic Guided Model for Single Image Enhancement and Restoration.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432308
Yuhui Wu, Guoqing Wang, Shaochong Liu, Yang Yang, Wei Li, Xiongxin Tang, Shuhang Gu, Chongyi Li, Heng Tao Shen

Low-light image enhancement (LLIE) investigates how to improve the brightness of an image captured in illumination-insufficient environments. The majority of existing methods enhance low-light images in a global and uniform manner, without taking into account the semantic information of different regions. Consequently, a network may easily deviate from the original color of local regions. To address this issue, we propose a semantic-aware knowledge-guided framework (SKF) that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model. We concentrate on incorporating semantic knowledge from three key aspects: a semantic-aware embedding module that adaptively integrates semantic priors in the feature representation space, a semantic-guided color histogram loss that preserves color consistency of various instances, and a semantic-guided adversarial loss that produces more natural textures based on semantic priors. Our SKF is appealing as a general framework for the LLIE task. We further present a refined framework, SKF++, with two new techniques: (a) an extra convolutional branch for intra-class illumination and color recovery by extracting local information, and (b) an equalization-based histogram transformation for contrast enhancement and high dynamic range adjustment. Extensive experiments on various benchmarks of the LLIE task and other image processing tasks show that models equipped with SKF/SKF++ significantly outperform the baselines, and that SKF/SKF++ generalizes well to different models and scenes. Besides, the potential benefits of our method for face detection and semantic segmentation in low-light conditions are discussed. The code and pre-trained models are publicly available at https://github.com/langmanbusi/Semantic-Aware-Low-Light-Image-Enhancement.
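
The semantic-guided color histogram loss can be approximated with a differentiable soft histogram computed inside each semantic region; the bin count, kernel bandwidth, and per-class averaging below are assumptions for illustration, not the paper's exact formulation:

```python
import torch

def soft_color_histogram(img, mask, bins=16, bandwidth=0.02):
    """Differentiable per-region color histogram via a Gaussian kernel.
    img: (B, 3, H, W) in [0, 1]; mask: (B, H, W) soft region weights."""
    centers = torch.linspace(0, 1, bins, device=img.device)          # (bins,)
    px = img.flatten(2)                                              # (B, 3, N)
    w = mask.flatten(1).unsqueeze(1)                                 # (B, 1, N)
    kern = torch.exp(-0.5 * ((px.unsqueeze(-1) - centers) / bandwidth) ** 2)
    hist = (kern * w.unsqueeze(-1)).sum(dim=2)                       # (B, 3, bins)
    return hist / hist.sum(dim=-1, keepdim=True).clamp_min(1e-8)

def semantic_histogram_loss(enhanced, reference, seg_masks):
    """seg_masks: (B, C, H, W) soft masks from a semantic segmentation model."""
    loss = 0.0
    for c in range(seg_masks.shape[1]):
        h_e = soft_color_histogram(enhanced, seg_masks[:, c])
        h_r = soft_color_histogram(reference, seg_masks[:, c])
        loss = loss + (h_e - h_r).abs().mean()                       # L1 between region histograms
    return loss / seg_masks.shape[1]
```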

Citations: 0
Unpaired Image-text Matching via Multimodal Aligned Conceptual Knowledge.
Pub Date : 2024-07-23 DOI: 10.1109/TPAMI.2024.3432552
Yan Huang, Yuming Wang, Yunan Zeng, Junshi Huang, Zhenhua Chai, Liang Wang

Recently, the accuracy of image-text matching has been greatly improved by multimodal pretrained models, all of which use millions or billions of paired images and texts for supervised model learning. Different from them, human brains can match images with texts well using their stored multimodal knowledge. Inspired by that, this paper studies a new scenario, unpaired image-text matching, in which paired images and texts are assumed to be unavailable during model learning. To deal with it, we accordingly propose a simple yet effective method, namely Multimodal Aligned Conceptual Knowledge (MACK). First, we collect a set of words and their related image regions from publicly available datasets, and compute prototypical region representations to obtain pretrained general knowledge. To make the obtained knowledge better suit certain datasets, we refine it using unpaired images and texts in a self-supervised learning manner to obtain fine-tuned domain knowledge. Then, to match given images with texts based on the knowledge, we represent parsed words in the texts by prototypical region representations and compute region-word similarity scores. Finally, the scores are aggregated by bidirectional similarity pooling into an image-text similarity score, which can be directly used for unpaired image-text matching. The proposed MACK is complementary to existing models and can be easily extended as a re-ranking method to substantially improve their performance on zero-shot and cross-dataset image-text matching.
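
The matching step described above reduces to a region-word similarity matrix pooled in both directions; a minimal sketch, assuming words are already represented by prototypical region features (the shapes and the equal weighting of the two directions are illustrative):

```python
import torch
import torch.nn.functional as F

def image_text_score(word_embs, region_feats):
    """word_embs: (n_words, D) prototypical representations of parsed words;
    region_feats: (n_regions, D) region features of the candidate image."""
    w = F.normalize(word_embs, dim=-1)
    r = F.normalize(region_feats, dim=-1)
    sim = w @ r.T                                   # (n_words, n_regions) region-word similarities
    text_to_image = sim.max(dim=1).values.mean()    # best-matching region for each word
    image_to_text = sim.max(dim=0).values.mean()    # best-matching word for each region
    return 0.5 * (text_to_image + image_to_text)    # bidirectional similarity pooling
```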

Citations: 0