
Latest articles from IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)

Simultaneous Temperature Estimation and Nonuniformity Correction From Multiple Frames
Navot Oz;Omri Berman;Nir Sochen;David Mendlovic;Iftach Klapp
IR cameras are widely used for temperature measurements in various applications, including agriculture, medicine, and security. Low-cost IR cameras have the immense potential to replace expensive radiometric cameras in these applications; however, low-cost microbolometer-based IR cameras are prone to spatially variant nonuniformity and to drift in temperature measurements, which limit their usability in practical scenarios. To address these limitations, we propose a novel approach for simultaneous temperature estimation and nonuniformity correction (NUC) from multiple frames captured by low-cost microbolometer-based IR cameras. We leverage the camera’s physical image-acquisition model and incorporate it into a deep-learning architecture termed kernel prediction network (KPN), which enables us to combine multiple frames despite imperfect registration between them. We also propose a novel offset block that incorporates the ambient temperature into the model and enables us to estimate the offset of the camera, which is a key factor in temperature estimation. Our findings demonstrate that the number of frames has a significant impact on the accuracy of the temperature estimation and NUC. Moreover, introduction of the offset block results in significantly improved performance compared to vanilla KPN. The method was tested on real data collected by a low-cost IR camera mounted on an unmanned aerial vehicle, showing only a small average error of 0.27–0.54 °C relative to costly scientific-grade radiometric cameras. Real data collected horizontally resulted in similar errors of 0.48–0.68 °C. Our method provides an accurate and efficient solution for simultaneous temperature estimation and NUC, which has important implications for a wide range of practical applications.
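The per-pixel fusion at the heart of a kernel-prediction approach can be pictured in a few lines. This is an illustrative sketch only: the function name, array shapes, and the softmax-weighted average are my assumptions, not the paper's architecture, and no learned network is involved.

```python
import numpy as np

def fuse_frames(frames, logits):
    """Fuse N imperfectly registered frames with per-pixel weights.

    frames: (N, H, W) stack of IR frames.
    logits: (N, H, W) scores of the kind a kernel prediction network
            would output (here they are plain inputs, not learned).
    A softmax over the frame axis turns the scores into fusion weights.
    """
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * frames).sum(axis=0)
```

With equal logits this reduces to a plain per-pixel mean; a trained KPN would instead up-weight well-registered, low-noise frames.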
Citations: 0
Cross-Attention Regression Flow for Defect Detection
Binhui Liu;Tianchu Guo;Bin Luo;Zhen Cui;Jian Yang
Defect detection from images is a crucial and challenging task in industrial scenarios due to the scarcity and unpredictability of anomalous samples. However, existing defect detection methods exhibit low detection performance on small-size defects. In this work, we propose a Cross-Attention Regression Flow (CARF) framework to model a compact distribution of normal visual patterns for separating outliers. To retain rich scale information of defects, we build an interactive cross-attention pattern flow module to jointly transform and align distributions of multi-layer features, which is beneficial for detecting small-size defects that may be annihilated in high-level features. To handle the complexity of multi-layer feature distributions, we introduce a layer-conditional autoregression module to improve the fitting capacity of data likelihoods on multi-layer features. By transforming the multi-layer feature distributions into a latent space, we can better characterize normal visual patterns. Extensive experiments on four public datasets and our collected industrial dataset demonstrate that the proposed CARF outperforms state-of-the-art methods, particularly in detecting small-size defects.
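A flow-based detector of this kind scores a feature by its negative log-likelihood after mapping it to a latent base distribution. A minimal one-dimensional sketch, assuming a single affine transform and a standard-normal latent (CARF's actual flow is multi-layer, cross-attentive, and layer-conditional; the function name is illustrative):

```python
import numpy as np

def flow_nll(x, scale, shift):
    """Anomaly score of x under a one-step affine 'flow' z = scale*x + shift.

    Change of variables for a standard-normal latent gives
    -log p(x) = 0.5*z^2 + 0.5*log(2*pi) - log|scale|.
    Larger scores mean less likely, i.e. more anomalous.
    """
    z = scale * x + shift
    return 0.5 * z**2 + 0.5 * np.log(2 * np.pi) - np.log(np.abs(scale))
```

Thresholding this score separates outliers from the compact normal distribution.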
Citations: 0
Adaptive Log-Euclidean Metrics for SPD Matrix Learning
Ziheng Chen;Yue Song;Tianyang Xu;Zhiwu Huang;Xiao-Jun Wu;Nicu Sebe
Symmetric Positive Definite (SPD) matrices have received wide attention in machine learning due to their intrinsic capacity to encode underlying structural correlation in data. Many successful Riemannian metrics have been proposed to reflect the non-Euclidean geometry of SPD manifolds. However, most existing metric tensors are fixed, which might lead to sub-optimal performance for SPD matrix learning, especially for deep SPD neural networks. To remedy this limitation, we leverage the commonly encountered pullback techniques and propose Adaptive Log-Euclidean Metrics (ALEMs), which extend the widely used Log-Euclidean Metric (LEM). Compared with the previous Riemannian metrics, our metrics contain learnable parameters, which can better adapt to the complex dynamics of Riemannian neural networks with minor extra computations. We also present a complete theoretical analysis to support our ALEMs, including algebraic and Riemannian properties. The experimental and theoretical results demonstrate the merit of the proposed metrics in improving the performance of SPD neural networks. The efficacy of our metrics is further showcased on a set of recently developed Riemannian building blocks, including Riemannian batch normalization, Riemannian Residual blocks, and Riemannian classifiers.
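For reference, the fixed Log-Euclidean Metric that ALEMs generalize is the Frobenius distance between matrix logarithms. A minimal sketch via symmetric eigendecomposition (function names are mine; the learnable parameters that make the metric adaptive are omitted):

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)          # SPD => real positive eigenvalues
    return (V * np.log(w)) @ V.T      # V diag(log w) V^T

def lem_distance(A, B):
    """Log-Euclidean distance d(A,B) = ||log(A) - log(B)||_F."""
    return np.linalg.norm(spd_log(A) - spd_log(B))
```

An adaptive variant would insert learnable weights into the log-domain distance, which is the direction the paper takes.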
Citations: 0
Change Representation and Extraction in Stripes: Rethinking Unsupervised Hyperspectral Image Change Detection With an Untrained Network
Bin Yang;Yin Mao;Licheng Liu;Leyuan Fang;Xinxin Liu
Deep learning-based hyperspectral image (HSI) change detection (CD) approaches have a strong ability to leverage spectral-spatial-temporal information through automatic feature extraction, and currently dominate the research field. However, their efficiency and universality are limited by their dependency on labeled data. Although the newly applied untrained networks can avoid the need for labeled data, their feature volatility in the simple difference space easily leads to inaccurate CD results. Inspired by the interesting finding that salient changes appear as bright “stripes” in a new feature space, we propose a novel unsupervised CD method that represents and models changes in stripes for HSIs (named StripeCD), which integrates optimization modeling into an untrained network. The StripeCD method constructs a new feature space that represents change features in stripes and models them in a novel optimization manner. It consists of three main parts: 1) a dual-branch untrained convolutional network, which extracts deep difference features from bitemporal HSIs and is combined with a two-stage channel selection strategy to emphasize the important channels that contribute to CD; 2) a multiscale forward-backward segmentation framework for salient change representation, which transforms deep difference features into a new feature space by exploiting the structure information of ground objects and associates salient changes with the stripe-shaped change component; and 3) a stripe-shaped change extraction model, which characterizes the global sparsity and local discontinuity of salient changes, explores the intrinsic properties of deep difference features, and constructs model-based constraints to better identify changed regions in a controllable manner. The proposed StripeCD method outperformed state-of-the-art unsupervised CD approaches on three widely used datasets. In addition, StripeCD suggests that untrained networks merit further investigation for facilitating reliable CD.
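A global-sparsity constraint on a change component is the kind of term typically optimized with an L1 proximal step. A generic sketch of that step, using soft-thresholding (a standard tool for such optimization models, not code from the paper):

```python
import numpy as np

def soft_threshold(X, lam):
    """Proximal operator of lam * ||X||_1.

    Shrinks every entry toward zero by lam and zeroes out anything
    smaller, which is how sparse change components are typically
    isolated inside an alternating optimization loop.
    """
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)
```

Entries of the difference features that survive the shrinkage would correspond to candidate changed pixels; the rest are treated as background.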
Citations: 0
Neural Degradation Representation Learning for All-in-One Image Restoration
Mingde Yao;Ruikang Xu;Yuanshen Guan;Jie Huang;Zhiwei Xiong
Existing methods have demonstrated effective performance on a single degradation type. In practical applications, however, the degradation is often unknown, and the mismatch between the model and the degradation will result in a severe performance drop. In this paper, we propose an all-in-one image restoration network that tackles multiple degradations. Due to the heterogeneous nature of different types of degradations, it is difficult to process multiple degradations in a single network. To this end, we propose to learn a neural degradation representation (NDR) that captures the underlying characteristics of various degradations. The learned NDR adaptively decomposes different types of degradations, similar to a neural dictionary that represents basic degradation components. Subsequently, we develop a degradation query module and a degradation injection module to effectively approximate and utilize the specific degradation based on NDR, enabling the all-in-one restoration ability for multiple degradations. Moreover, we propose a bidirectional optimization strategy to effectively drive NDR to learn the degradation representation by optimizing the degradation and restoration processes alternately. Comprehensive experiments on representative types of degradations (including noise, haze, rain, and downsampling) demonstrate the effectiveness and generalizability of our method. Code is available at https://github.com/mdyao/NDR-Restore.
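A degradation query of the sort described can be pictured as attention over a small learned dictionary of degradation components. A minimal sketch, assuming plain dot-product attention and treating the dictionary as a fixed array (the paper's module is a trained network component; all names here are illustrative):

```python
import numpy as np

def degradation_query(feat, dictionary):
    """Attention-style lookup into a 'neural dictionary' of degradations.

    feat:       (d,) feature vector of the degraded input.
    dictionary: (K, d) array of K basic degradation components.
    Returns a softmax-weighted mix of dictionary entries, i.e. an
    estimate of the specific degradation acting on this input.
    """
    scores = dictionary @ feat
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ dictionary
```

The returned vector could then condition the restoration branch, which is the role the degradation injection module plays in the paper.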
Citations: 0
Image-Level Adaptive Adversarial Ranking for Person Re-Identification
Xi Yang;Huanling Liu;Nannan Wang;Xinbo Gao
The potential vulnerability of deep neural networks and the complexity of pedestrian images greatly limit the application of person re-identification techniques in the field of smart security. Current attack methods often focus on generating carefully crafted adversarial samples or only on disrupting the metric distances between targets and similar pedestrians. However, both aspects are crucial for evaluating the security of methods adapted for person re-identification tasks. For this reason, we propose an image-level adaptive adversarial ranking method that comprehensively considers both aspects, to adapt to changes in pedestrians in the real world and effectively evaluate the robustness of models in adversarial environments. To generate more refined adversarial samples, our image representation enhancement module leverages channel-wise information entropy, assigning varying weights to different channels to produce images with richer information content, along with a generative adversarial network to create adversarial samples. Subsequently, for adaptive perturbation of the ranking, an adaptive weight confusion ranking loss is presented to calculate the weights of distances between positive or negative samples and query samples. It endeavors to push positive samples away from query samples and bring negative samples closer, thereby interfering with the ranking of the system. Notably, this method requires no additional hyperparameter tuning or extra data training, making it an adaptive attack strategy. Experimental results on large-scale datasets such as Market1501, CUHK03, and DukeMTMC demonstrate the effectiveness of our method in attacking ReID systems.
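The direction of a confusion ranking objective can be sketched as an inverted triplet margin: where a clean ranker wants query-positive distances below query-negative ones, the attack penalizes exactly the opposite. This is a simplified, unweighted form; the adaptive distance weights of the actual loss are omitted, and the function name is an assumption:

```python
import numpy as np

def confusion_ranking_loss(d_pos, d_neg, margin=1.0):
    """Inverted triplet loss for a ranking attack.

    d_pos: distances from the query to positive-gallery samples.
    d_neg: distances from the query to negative-gallery samples.
    Loss is zero only once positives sit farther than negatives by
    at least the margin, i.e. once the ranking is confused.
    """
    return float(np.maximum(0.0, margin + d_neg - d_pos).mean())
```

Minimizing this over the perturbed image drives positives down and negatives up the retrieved ranking list.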
Citations: 0
Disentangled Sample Guidance Learning for Unsupervised Person Re-Identification
Haoxuanye Ji;Le Wang;Sanping Zhou;Wei Tang;Gang Hua
Unsupervised person re-identification (Re-ID) is challenging due to the lack of ground truth labels. Most existing methods employ iterative clustering to generate pseudo labels for unlabeled training data to guide the learning process. However, how to select samples that are both associated with high-confidence pseudo labels and hard (discriminative) enough remains a critical problem. To address this issue, a disentangled sample guidance learning (DSGL) method is proposed for unsupervised Re-ID. The method consists of disentangled sample mining (DSM) and discriminative feature learning (DFL). DSM disentangles (unlabeled) person images into identity-relevant and identity-irrelevant factors, which are used to construct disentangled positive/negative groups that contain sufficiently discriminative information. DFL incorporates the mined disentangled sample groups into model training by a surrogate disentangled learning loss and a disentangled second-order similarity regularization, to help the model better distinguish the characteristics of different persons. By using the DSGL training strategy, the mAP on Market-1501 and MSMT17 increases by 6.6% and 10.1% when applying the ResNet50 framework, and by 0.6% and 6.9% with the vision transformer (ViT) framework, respectively, validating the effectiveness of the DSGL method. Moreover, DSGL surpasses previous state-of-the-art methods by achieving higher Top-1 accuracy and mAP on the Market-1501, MSMT17, PersonX, and VeRi-776 datasets. The source code for this paper is available at https://github.com/jihaoxuanye/DiseSGL.
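The idea behind a second-order similarity term can be sketched as agreement between two first-order similarity profiles computed against a shared feature bank. This is one illustrative reading with hypothetical names, not the paper's exact regularizer:

```python
import numpy as np

def second_order_similarity(f1, f2, bank):
    """Agreement between the similarity profiles of two features.

    First order: each feature's similarity scores against a bank of
    reference features. Second order: the cosine between those two
    score vectors. Two images of the same identity should relate to
    the rest of the dataset in the same way, even if their raw
    features differ.
    """
    def profile(f):
        s = bank @ f
        return s / np.linalg.norm(s)
    return float(profile(f1) @ profile(f2))
```

A regularizer built on this quantity would pull second-order similarity toward 1 for mined positive pairs and toward 0 for negatives.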
{"title":"Disentangled Sample Guidance Learning for Unsupervised Person Re-Identification","authors":"Haoxuanye Ji;Le Wang;Sanping Zhou;Wei Tang;Gang Hua","doi":"10.1109/TIP.2024.3456008","DOIUrl":"10.1109/TIP.2024.3456008","url":null,"abstract":"Unsupervised person re-identification (Re-ID) is challenging due to the lack of ground truth labels. Most existing methods employ iterative clustering to generate pseudo labels for unlabeled training data to guide the learning process. However, how to select samples that are both associated with high-confidence pseudo labels and hard (discriminative) enough remains a critical problem. To address this issue, a disentangled sample guidance learning (DSGL) method is proposed for unsupervised Re-ID. The method consists of disentangled sample mining (DSM) and discriminative feature learning (DFL). DSM disentangles (unlabeled) person images into identity-relevant and identity-irrelevant factors, which are used to construct disentangled positive/negative groups that contain discriminative enough information. DFL incorporates the mined disentangled sample groups into model training by a surrogate disentangled learning loss and a disentangled second-order similarity regularization, to help the model better distinguish the characteristics of different persons. By using the DSGL training strategy, the mAP on Market-1501 and MSMT17 increases by 6.6% and 10.1% when applying the ResNet50 framework, and by 0.6% and 6.9% with the vision transformer (VIT) framework, respectively, validating the effectiveness of the DSGL method. Moreover, DSGL surpasses previous state-of-the-art methods by achieving higher Top-1 accuracy and mAP on the Market-1501, MSMT17, PersonX, and VeRi-776 datasets. 
The source code for this paper is available at <uri>https://github.com/jihaoxuanye/DiseSGL</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5144-5158"},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142174675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Convex Hull Prediction for Adaptive Video Streaming by Recurrent Learning
Somdyuti Paul;Andrey Norkin;Alan C. Bovik
Adaptive video streaming relies on the construction of efficient bitrate ladders to deliver the best possible visual quality to viewers under bandwidth constraints. The traditional method of content dependent bitrate ladder selection requires a video shot to be pre-encoded with multiple encoding parameters to find the optimal operating points given by the convex hull of the resulting rate-quality curves. However, this pre-encoding step is equivalent to an exhaustive search process over the space of possible encoding parameters, which causes significant overhead in terms of both computation and time expenditure. To reduce this overhead, we propose a deep learning based method of content aware convex hull prediction. We employ a recurrent convolutional network (RCN) to implicitly analyze the spatiotemporal complexity of video shots in order to predict their convex hulls. A two-step transfer learning scheme is adopted to train our proposed RCN-Hull model, which ensures sufficient content diversity to analyze scene complexity, while also making it possible to capture the scene statistics of pristine source videos. Our experimental results reveal that our proposed model yields better approximations of the optimal convex hulls, and offers competitive time savings as compared to existing approaches. On average, the pre-encoding time was reduced by 53.8% by our method, while the average Bjøntegaard delta bitrate (BD-rate) of the predicted convex hulls against ground truth was 0.26%, and the mean absolute deviation of the BD-rate distribution was 0.57%.
Adaptive video streaming relies on the construction of efficient bitrate ladders to deliver the best possible visual quality to viewers under bandwidth constraints. The traditional method of content-dependent bitrate ladder selection requires a video shot to be pre-encoded with multiple encoding parameters to find the optimal operating points given by the convex hull of the resulting rate-quality curves. However, this pre-encoding step is equivalent to an exhaustive search over the space of possible encoding parameters, which causes significant overhead in both computation and time. To reduce this overhead, we propose a deep learning based method of content-aware convex hull prediction. We employ a recurrent convolutional network (RCN) to implicitly analyze the spatiotemporal complexity of video shots in order to predict their convex hulls. A two-step transfer learning scheme is adopted to train our proposed RCN-Hull model, which ensures sufficient content diversity to analyze scene complexity, while also making it possible to capture the scene statistics of pristine source videos. Our experimental results reveal that our proposed model yields better approximations of the optimal convex hulls, and offers competitive time savings as compared to existing approaches. On average, the pre-encoding time was reduced by 53.8% by our method, while the average Bjøntegaard delta bitrate (BD-rate) of the predicted convex hulls against ground truth was 0.26%, and the mean absolute deviation of the BD-rate distribution was 0.57%.
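The exhaustive baseline described above reduces to a geometric step: given pre-encoded (bitrate, quality) points across resolutions and QPs, the optimal operating points lie on the upper convex hull of the point cloud. A minimal sketch of that step (Andrew's monotone chain; this is the geometry only, not the paper's predictive RCN-Hull model):

```python
def upper_convex_hull(points):
    """Return the (bitrate, quality) points on the upper convex hull,
    sorted by increasing bitrate. Input: iterable of (rate, quality)."""
    pts = sorted(set(points))  # sort by rate, then quality
    hull = []
    for p in pts:
        # Pop trailing points that fall on or below the chord to p:
        # they cannot be optimal operating points.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            if cross >= 0:  # non-left turn -> hull[-1] is dominated
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```

A bitrate ladder is then read off the hull by sampling target bitrates; the paper's contribution is predicting this hull from content features so the costly pre-encoding sweep can be skipped.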
{"title":"Convex Hull Prediction for Adaptive Video Streaming by Recurrent Learning","authors":"Somdyuti Paul;Andrey Norkin;Alan C. Bovik","doi":"10.1109/TIP.2024.3455989","DOIUrl":"10.1109/TIP.2024.3455989","url":null,"abstract":"Adaptive video streaming relies on the construction of efficient bitrate ladders to deliver the best possible visual quality to viewers under bandwidth constraints. The traditional method of content dependent bitrate ladder selection requires a video shot to be pre-encoded with multiple encoding parameters to find the optimal operating points given by the convex hull of the resulting rate-quality curves. However, this pre-encoding step is equivalent to an exhaustive search process over the space of possible encoding parameters, which causes significant overhead in terms of both computation and time expenditure. To reduce this overhead, we propose a deep learning based method of content aware convex hull prediction. We employ a recurrent convolutional network (RCN) to implicitly analyze the spatiotemporal complexity of video shots in order to predict their convex hulls. A two-step transfer learning scheme is adopted to train our proposed RCN-Hull model, which ensures sufficient content diversity to analyze scene complexity, while also making it possible to capture the scene statistics of pristine source videos. Our experimental results reveal that our proposed model yields better approximations of the optimal convex hulls, and offers competitive time savings as compared to existing approaches. 
On average, the pre-encoding time was reduced by 53.8% by our method, while the average Bjøntegaard delta bitrate (BD-rate) of the predicted convex hulls against ground truth was 0.26%, and the mean absolute deviation of the BD-rate distribution was 0.57%.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5114-5128"},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142174679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UniParser: Multi-Human Parsing With Unified Correlation Representation Learning
Jiaming Chu;Lei Jin;Yinglei Teng;Jianshu Li;Yunchao Wei;Zheng Wang;Junliang Xing;Shuicheng Yan;Jian Zhao
Multi-human parsing is an image segmentation task necessitating both instance-level and fine-grained category-level information. However, prior research has typically processed these two types of information through distinct branch types and output formats, leading to inefficient and redundant frameworks. This paper introduces UniParser, which integrates instance-level and category-level representations in three key aspects: 1) we propose a unified correlation representation learning approach, allowing our network to learn instance and category features within the cosine space; 2) we unify the form of outputs of each modules as pixel-level results while supervising instance and category features using a homogeneous label accompanied by an auxiliary loss; and 3) we design a joint optimization procedure to fuse instance and category representations. By unifying instance-level and category-level output, UniParser circumvents manually designed post-processing techniques and surpasses state-of-the-art methods, achieving 49.3% AP on MHPv2.0 and 60.4% AP on CIHP. We have released our source code, pretrained models, and demos to facilitate future studies on https://github.com/cjm-sfw/Uniparser.
Multi-human parsing is an image segmentation task necessitating both instance-level and fine-grained category-level information. However, prior research has typically processed these two types of information through distinct branch types and output formats, leading to inefficient and redundant frameworks. This paper introduces UniParser, which integrates instance-level and category-level representations in three key aspects: 1) we propose a unified correlation representation learning approach, allowing our network to learn instance and category features within the cosine space; 2) we unify the output form of each module as pixel-level results, while supervising instance and category features using a homogeneous label accompanied by an auxiliary loss; and 3) we design a joint optimization procedure to fuse instance and category representations. By unifying instance-level and category-level output, UniParser circumvents manually designed post-processing techniques and surpasses state-of-the-art methods, achieving 49.3% AP on MHPv2.0 and 60.4% AP on CIHP. We have released our source code, pretrained models, and demos to facilitate future studies at https://github.com/cjm-sfw/Uniparser.
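To make the "correlation in cosine space" phrase concrete, here is a minimal sketch of scoring per-pixel features against learned prototypes by cosine similarity, so that an argmax over prototypes yields a pixel-level map. Shapes and names are hypothetical; in UniParser the features and prototypes are learned, and both instance- and category-level outputs share this form.

```python
import numpy as np

def cosine_correlation(pixel_feats, prototypes):
    """pixel_feats: (H*W, C) per-pixel embeddings; prototypes: (K, C)
    instance or category prototypes. Returns an (H*W, K) matrix of cosine
    similarities; argmax over K gives a pixel-level label map."""
    p = pixel_feats / np.linalg.norm(pixel_feats, axis=1, keepdims=True)
    q = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return p @ q.T  # cosine similarity = dot product of unit vectors
```

Because both instance and category predictions come out as the same pixel-level score maps, no hand-designed post-processing is needed to reconcile two branch formats.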
{"title":"UniParser: Multi-Human Parsing With Unified Correlation Representation Learning","authors":"Jiaming Chu;Lei Jin;Yinglei Teng;Jianshu Li;Yunchao Wei;Zheng Wang;Junliang Xing;Shuicheng Yan;Jian Zhao","doi":"10.1109/TIP.2024.3456004","DOIUrl":"10.1109/TIP.2024.3456004","url":null,"abstract":"Multi-human parsing is an image segmentation task necessitating both instance-level and fine-grained category-level information. However, prior research has typically processed these two types of information through distinct branch types and output formats, leading to inefficient and redundant frameworks. This paper introduces UniParser, which integrates instance-level and category-level representations in three key aspects: 1) we propose a unified correlation representation learning approach, allowing our network to learn instance and category features within the cosine space; 2) we unify the form of outputs of each modules as pixel-level results while supervising instance and category features using a homogeneous label accompanied by an auxiliary loss; and 3) we design a joint optimization procedure to fuse instance and category representations. By unifying instance-level and category-level output, UniParser circumvents manually designed post-processing techniques and surpasses state-of-the-art methods, achieving 49.3% AP on MHPv2.0 and 60.4% AP on CIHP. 
We have released our source code, pretrained models, and demos to facilitate future studies on <uri>https://github.com/cjm-sfw/Uniparser</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5159-5171"},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142174677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Target Before Shooting: Accurate Anomaly Detection and Localization Under One Millisecond via Cascade Patch Retrieval
Hanxi Li;Jianfei Hu;Bo Li;Hao Chen;Yongbin Zheng;Chunhua Shen
In this work, by re-examining the “matching” nature of Anomaly Detection (AD), we propose a novel AD framework that simultaneously enjoys new records of AD accuracy and dramatically high running speed. In this framework, the anomaly detection problem is solved via a cascade patch retrieval procedure that retrieves the nearest neighbors for each test image patch in a coarse-to-fine fashion. Given a test sample, the top-K most similar training images are first selected based on a robust histogram matching process. Secondly, the nearest neighbor of each test patch is retrieved over the similar geometrical locations on those “most similar images”, by using a carefully trained local metric. Finally, the anomaly score of each test image patch is calculated based on the distance to its “nearest neighbor” and the “non-background” probability. The proposed method is termed “Cascade Patch Retrieval” (CPR) in this work. Different from the previous patch-matching-based AD algorithms, CPR selects proper “targets” (reference images and patches) before “shooting” (patch-matching). On the well-acknowledged MVTec AD, BTAD and MVTec-3D AD datasets, the proposed algorithm consistently outperforms all the comparing SOTA methods by remarkable margins, measured by various AD metrics. Furthermore, CPR is extremely efficient. It runs at the speed of 113 FPS with the standard setting while its simplified version only requires less than 1 ms to process an image at the cost of a trivial accuracy drop. The code of CPR is available at https://github.com/flyinghu123/CPR.
In this work, by re-examining the "matching" nature of Anomaly Detection (AD), we propose a novel AD framework that simultaneously sets new records for AD accuracy and runs at dramatically high speed. In this framework, the anomaly detection problem is solved via a cascade patch retrieval procedure that retrieves the nearest neighbors for each test image patch in a coarse-to-fine fashion. Given a test sample, the top-K most similar training images are first selected based on a robust histogram matching process. Secondly, the nearest neighbor of each test patch is retrieved over the similar geometrical locations on those "most similar images", using a carefully trained local metric. Finally, the anomaly score of each test image patch is calculated based on the distance to its "nearest neighbor" and the "non-background" probability. The proposed method is termed "Cascade Patch Retrieval" (CPR) in this work. Different from previous patch-matching-based AD algorithms, CPR selects proper "targets" (reference images and patches) before "shooting" (patch-matching). On the well-acknowledged MVTec AD, BTAD and MVTec-3D AD datasets, the proposed algorithm consistently outperforms all competing SOTA methods by remarkable margins, as measured by various AD metrics. Furthermore, CPR is extremely efficient: it runs at 113 FPS with the standard setting, while its simplified version requires less than 1 ms per image at the cost of a trivial accuracy drop. The code of CPR is available at https://github.com/flyinghu123/CPR.
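The coarse-to-fine cascade above can be sketched in two toy stages: a coarse histogram-based ranking of reference images, then a fine per-patch nearest-neighbor distance as the anomaly score. Everything here is illustrative (plain Euclidean distance instead of CPR's trained local metric, and no "non-background" probability term):

```python
import numpy as np

def topk_by_histogram(test_img, train_imgs, k=2, bins=8):
    """Coarse stage: rank training images by histogram intersection with
    the test image (pixel values assumed in [0, 1]); return top-k indices."""
    h_test, _ = np.histogram(test_img, bins=bins, range=(0.0, 1.0))
    scores = []
    for img in train_imgs:
        h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        scores.append(np.minimum(h_test, h).sum())  # histogram intersection
    return np.argsort(scores)[::-1][:k]

def patch_anomaly_scores(test_patches, train_patches):
    """Fine stage: for each test patch feature, the squared distance to its
    nearest neighbor among the retrieved images' patches. Large distance
    means no good match was found, i.e. likely anomalous."""
    # pairwise squared Euclidean distances, shape (num_test, num_train)
    d = ((test_patches[:, None, :] - train_patches[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1)
```

CPR additionally restricts the fine search to similar geometrical locations on the retrieved images, which is what keeps the per-image cost near one millisecond.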
{"title":"Target Before Shooting: Accurate Anomaly Detection and Localization Under One Millisecond via Cascade Patch Retrieval","authors":"Hanxi Li;Jianfei Hu;Bo Li;Hao Chen;Yongbin Zheng;Chunhua Shen","doi":"10.1109/TIP.2024.3448263","DOIUrl":"10.1109/TIP.2024.3448263","url":null,"abstract":"In this work, by re-examining the “matching” nature of Anomaly Detection (AD), we propose a novel AD framework that simultaneously enjoys new records of AD accuracy and dramatically high running speed. In this framework, the anomaly detection problem is solved via a cascade patch retrieval procedure that retrieves the nearest neighbors for each test image patch in a coarse-to-fine fashion. Given a test sample, the top-K most similar training images are first selected based on a robust histogram matching process. Secondly, the nearest neighbor of each test patch is retrieved over the similar geometrical locations on those “most similar images”, by using a carefully trained local metric. Finally, the anomaly score of each test image patch is calculated based on the distance to its “nearest neighbor” and the “non-background” probability. The proposed method is termed “Cascade Patch Retrieval” (CPR) in this work. Different from the previous patch-matching-based AD algorithms, CPR selects proper “targets” (reference images and patches) before “shooting” (patch-matching). On the well-acknowledged MVTec AD, BTAD and MVTec-3D AD datasets, the proposed algorithm consistently outperforms all the comparing SOTA methods by remarkable margins, measured by various AD metrics. Furthermore, CPR is extremely efficient. It runs at the speed of 113 FPS with the standard setting while its simplified version only requires less than 1 ms to process an image at the cost of a trivial accuracy drop. 
The code of CPR is available at <uri>https://github.com/flyinghu123/CPR</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5606-5621"},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142170998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0