
Latest publications in IEEE Transactions on Pattern Analysis and Machine Intelligence

Enhancing Adversarial Transferability with Cost-efficient Landscape Flattening.
IF 18.6 Pub Date : 2026-02-13 DOI: 10.1109/TPAMI.2026.3664421
Zhipeng Wei, Jingjing Chen, Feng Han, Yue Yu, Yu-Gang Jiang

The transferability of adversarial examples across different models has drawn considerable attention recently, particularly in targeted transferability. Prior research has empirically shown that optimizing adversarial perturbations at neighboring points with the highest loss value improves transferability. While effective, such a method requires multiple iterations to reach the local maxima and disregards the local minima of the input loss landscape. In this paper, we theoretically show that enhancing adversarial transferability is attainable by flattening the input loss landscape. This is accomplished through the perturbation optimization at both local maxima and minima. Moreover, we propose the Cost-efficient LandscapE Flattening (CLEF) attack to consider local maxima and minima around current inputs in a cost-efficient way to flatten the loss landscape and improve adversarial transferability. Specifically, we reuse the gradients of the previous attack step to assist current inputs in reaching local maxima, and employ probabilistic modeling to learn the distributional representations of perturbations that assist current inputs in reaching local minima. This probabilistic modeling can be pre-trained on dozens of images from other domains, enabling us to directly sample this type of perturbation from the pre-trained distribution when attacking. Experimental results demonstrate that integrating local maxima and minima into targeted transferable attacks can significantly flatten the loss landscape of the crafted adversarial examples, resulting in improved adversarial transferability.
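The gradient-reuse step at the core of CLEF can be illustrated in a few lines. The toy sketch below uses a convex quadratic as a stand-in loss and folds the previous attack step's gradient into the current sign-gradient update; the `reuse` weight and the quadratic surrogate are our assumptions for illustration only, not the authors' implementation:

```python
import numpy as np

# Toy differentiable surrogate: loss(x) = 0.5 * ||W x - t||^2 stands in for a
# classifier's loss; its gradient is W^T (W x - t).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
t = rng.standard_normal(4)

def loss_and_grad(x):
    r = W @ x - t
    return 0.5 * r @ r, W.T @ r

def attack(x0, steps=10, alpha=0.1, reuse=0.5):
    # Mix the stored gradient from the previous step into the current update
    # instead of spending an extra forward/backward pass (hypothetical `reuse` weight).
    x, prev_g = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        _, g = loss_and_grad(x)
        x = x + alpha * np.sign(g + reuse * prev_g)
        prev_g = g
    return x

x0 = rng.standard_normal(4)
loss_before, _ = loss_and_grad(x0)
loss_after, _ = loss_and_grad(attack(x0))
print(loss_before < loss_after)  # sign-gradient ascent should raise the toy loss
```

Reusing the stored gradient is essentially free, which is where the "cost-efficient" framing comes from; the paper additionally learns a perturbation distribution for the local-minima side, which this sketch omits.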

Citations: 0
Distribution-to-Points Matching for Image Text Retrieval.
IF 18.6 Pub Date : 2026-02-13 DOI: 10.1109/TPAMI.2026.3664613
Zheng Wang, Xing Xu, Lei Zhu, Jingkuan Song, Yang Yang, Heng Tao Shen

Eliminating semantic discrepancy between different modalities is the ultimate goal of image text retrieval. However, most of the existing methods only focus on retrieval of the ground-truth instance while ignoring those semantically similar instances yet unlabeled as positives, which causes the phenomenon of one-to-many correspondence. The mainstream solutions of this research are mainly based on uncertainty learning and the exploration of one-to-many correspondence is still insufficient albeit their significant progress. Therefore, this work develops a novel Distribution-to-Points (termed D2P) matching mechanism for image-text retrieval to capture the one-to-many correspondence between multiple samples and a given query via hypergraph modeling. Specifically, a given query is first mapped as a probabilistic embedding to learn its true semantic distribution based on Mahalanobis distance. Then each candidate instance in a mini-batch is regarded as a hypergraph node with its mean semantics while a Gaussian query is modeled as a hyperedge to capture the semantic correlations beyond the pair between candidate points and the query. Moreover, an energy-based semantic modeling framework is developed to pull all similar candidates (not only the ground truth) close to their query while pushing those dissimilar ones far away. In the end, distribution-to-points matching is learned based on the similarity measurement over the Mahalanobis distance, which considers semantic variance to perform many-to-one correspondence well. Experimental results on several widely used datasets and under various evaluation metrics confirm our superiority and effectiveness in improving the retrieval ability of the baseline including ground-truth matching and semantic multiplicity for image text retrieval.
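The Mahalanobis-based similarity between a probabilistic (Gaussian) query and candidate points can be sketched as follows, assuming a diagonal covariance for simplicity; this illustrates only the distance measure the abstract names, not the full D2P hypergraph machinery:

```python
import numpy as np

def mahalanobis_sim(mu, var, points):
    """Similarity of candidate `points` to a Gaussian query N(mu, diag(var)):
    negative squared Mahalanobis distance, so larger means more similar."""
    d2 = (((points - mu) ** 2) / var).sum(axis=1)
    return -d2

mu = np.zeros(3)
var = np.array([1.0, 4.0, 0.25])      # per-dimension semantic variance
pts = np.array([[0.0, 0.0, 0.0],
                [2.0, 0.0, 0.0],      # offset along a low-variance axis
                [0.0, 2.0, 0.0]])     # same offset along a high-variance axis
s = mahalanobis_sim(mu, var, pts)
print(s)
```

The same Euclidean offset scores higher along the high-variance axis (`pts[2]`, similarity -1) than along the low-variance one (`pts[1]`, similarity -4), which is how a distributional query tolerates semantic spread and keeps several plausible candidates close.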

Citations: 0
Soft Label Pruning and Quantization for Large-Scale Dataset Distillation.
IF 18.6 Pub Date : 2026-02-13 DOI: 10.1109/TPAMI.2026.3664488
Lingao Xiao, Yang He

Large-scale dataset distillation requires storing auxiliary soft labels that can be 30-40× (ImageNet-1K) or 200× (ImageNet-21K) larger than the condensed images, undermining the goal of dataset compression. We identify two fundamental issues necessitating such extensive labels: (1) insufficient image diversity, where high within-class similarity in synthetic images requires extensive augmentation, and (2) insufficient supervision diversity, where limited variety in supervisory signals during training leads to performance degradation at high compression rates. To address these challenges, we propose Label Pruning and Quantization for Large-scale Distillation (LPQLD). We enhance image diversity via class-wise batching and BN supervision during synthesis. For supervision diversity, we introduce Label Pruning with Dynamic Knowledge Reuse to enhance label-per-augmentation diversity, and Label Quantization with Calibrated Student-Teacher Alignment to enhance augmentation-per-image diversity. Our approach reduces soft label storage by 78× on ImageNet-1K and 500× on ImageNet-21K while improving accuracy by up to 7.2% and 2.8%, respectively. Extensive experiments validate the superiority of LPQLD across different network architectures and other dataset distillation methods.
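The storage arithmetic behind soft-label pruning can be sketched by keeping only the top-k probabilities per sample; the function name and the renormalization choice below are ours, and the paper's actual method adds dynamic knowledge reuse and calibrated quantization on top:

```python
import numpy as np

def prune_soft_labels(soft, k):
    """Keep only the k largest probabilities per row (sparse storage sketch)
    and renormalize so each pruned row still sums to 1."""
    idx = np.argsort(soft, axis=1)[:, -k:]           # indices of the k largest entries
    vals = np.take_along_axis(soft, idx, axis=1)
    vals = vals / vals.sum(axis=1, keepdims=True)
    return idx, vals

rng = np.random.default_rng(1)
logits = rng.standard_normal((2, 1000))
soft = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
idx, vals = prune_soft_labels(soft, k=10)
print(idx.shape, vals.shape)  # 2x10 indices + 2x10 values instead of 2x1000 probabilities
```

Storing 10 index/value pairs instead of 1000 dense probabilities is a roughly 50x reduction in this toy shape (exact ratios depend on dtypes); the abstract's 78x and 500x figures come from combining such pruning with quantization across augmentations.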

Citations: 0
50 Years of Automated Face Recognition.
IF 18.6 Pub Date : 2026-02-13 DOI: 10.1109/TPAMI.2026.3664269
Minchul Kim, Anil Jain, Xiaoming Liu

Over the past five decades, automated face recognition (FR) has progressed from handcrafted geometric and statistical approaches to advanced deep learning architectures that now approach, and in many cases exceed, human performance. This paper traces the historical and technological evolution of FR, encompassing early algorithmic paradigms through to contemporary neural systems trained on extensive real and synthetically generated datasets. We examine pivotal innovations that have driven this progression, including advances in dataset construction, loss function formulation, network architecture design, and feature fusion strategies. Furthermore, we analyze the relationship between data scale, diversity, and model generalization, highlighting how dataset expansion correlates with benchmark performance gains. Recent systems have achieved near-perfect large-scale identification accuracy, with the leading algorithm in the latest NIST FRTE 1:N benchmark reporting a False Negative Identification Rate (FNIR) of 0.15 percent at a False Positive Identification Rate (FPIR) of 0.001 on a gallery of over 10 million identities. Larger galleries increase false positive rates, and deployments at greater scales will see higher error rates. We delineate key open problems and emerging directions, including scalable training, multi-modal fusion, synthetic data, and interpretable recognition frameworks.
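The FNIR-at-FPIR operating point quoted above can be computed from mated (genuine) and non-mated (impostor) search scores as sketched below; the Gaussian score distributions are purely illustrative toy data, not NIST results:

```python
import numpy as np

def fnir_at_fpir(mated, nonmated, fpir=0.001):
    """Threshold at the (1 - fpir) quantile of impostor scores, then report the
    fraction of genuine searches falling below it (the miss rate, FNIR)."""
    thr = np.quantile(nonmated, 1.0 - fpir)
    return float((mated < thr).mean())

rng = np.random.default_rng(2)
mated = rng.normal(5.0, 1.0, 10_000)     # toy genuine (mated) search scores
nonmated = rng.normal(0.0, 1.0, 10_000)  # toy impostor (non-mated) search scores
print(fnir_at_fpir(mated, nonmated, fpir=0.001))
```

Because the threshold is set by the tail of the impostor score distribution, a larger gallery (more impostor comparisons per search) pushes the operating threshold up and FNIR with it, which is the scaling effect the abstract notes.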

Citations: 0
Penny-Wise and Pound-Foolish in AI-Generated Image Detection.
IF 18.6 Pub Date : 2026-02-13 DOI: 10.1109/TPAMI.2026.3664388
Yabin Wang, Zhiwu Huang, Zhou Su, Adam Prugel-Bennett, Xiaopeng Hong

The rise of AI-generated images has sparked serious concerns about their potential misuse across various domains, prompting the urgent need for robust detection methods. Despite advancements, many current approaches prioritize short-term gains at the expense of long-term effectiveness. This paper critiques the overly specialized approach of fine-tuning pre-trained models for short-term gains on a single AI image dataset, while disregarding the long-term imperative of achieving generalization and knowledge retention. To address this trade-off issue, we propose a novel learning framework (PoundNet) for the generalization of AI-generated image detection on a pre-trained vision-language model. PoundNet incorporates a learnable prompt design and a balanced objective to preserve broad knowledge from upstream tasks (object classification) while enhancing generalization for downstream tasks (AI-generated image detection). We train PoundNet on a single standard AI image dataset, following common practice in the literature. We then evaluate its performance across 10 large-scale public AI-generated image detection datasets with 5 main evaluation metrics, forming the largest benchmark test set for assessing the generalization ability of AI-generated image detection models, to our knowledge. The comprehensive benchmark evaluation demonstrates that PoundNet successfully balances generalization with knowledge retention, achieving a remarkable relative improvement of 19% in AI-generated image detection performance compared to state-of-the-art methods, while maintaining a strong performance of 63% on object classification tasks. The source code and data are available at https://github.com/iamwangyabin/PoundNet.

Citations: 0
Calibrating Biased Distribution in VFM-derived Latent Space via Cross-Domain Geometric Consistency.
IF 18.6 Pub Date : 2026-02-09 DOI: 10.1109/TPAMI.2026.3662389
Yanbiao Ma, Wei Dai, Zhiwu Lu, Bowei Liu, Jiayi Chen, Wenke Huang, Junchi Yan, Guancheng Wan

Despite the fast progress of deep learning, one standing challenge is the gap between the observed training samples and the underlying true distribution. This gap has multiple causes, e.g., sampling bias and noise. In the era of foundation models, we show that when leveraging off-the-shelf (vision) foundation models (e.g., CLIP, DINOv2) for feature extraction, the geometric shapes of the resulting feature distributions exhibit remarkable transferability across domains and datasets. To verify its practical usefulness, we embody our geometric knowledge-guided distribution calibration framework in two popular and challenging settings: federated learning and long-tailed recognition. In the federated setting, we devise a technique for acquiring the global geometric shape under privacy constraints, then leverage this knowledge to generate new samples for clients, with the aim of bridging the gap between local and global observations. In long-tailed learning, it utilizes the geometric knowledge transferred from sample-rich categories to recover the true distribution of sample-scarce tail classes. Comprehensive experiments show that our proposed geometric knowledge-guided distribution calibration effectively overcomes information deficits caused by data heterogeneity and sample imbalance, with boosted performance across benchmarks. Code published at: https://github.com/WeiDai-David/2025CVPR GGEUR.

Citations: 0
ASIL: Augmented Structural Information Learning for Deep Graph Clustering in Hyperbolic Space.
IF 18.6 Pub Date : 2026-02-06 DOI: 10.1109/TPAMI.2026.3661424
Li Sun, Zhenhao Huang, Yujie Wang, Hongbo Lv, Chunyang Liu, Hao Peng, Philip S Yu

Graph clustering is a longstanding topic in machine learning. In recent years, deep learning methods have achieved encouraging results, but they still require predefined cluster numbers $K$, and typically struggle with imbalanced graphs, especially in identifying minority clusters. These limitations motivate us to study a challenging yet practical problem: deep graph clustering without $K$, considering the imbalance found in real data. We approach this problem from a fresh perspective of information theory (i.e., structural information). In the literature, structural information has rarely been touched in deep clustering, and the classic definition falls short in its discrete formulation, neglecting node attributes and exhibiting prohibitive complexity. In this paper, we first establish a differentiable structural information, generalizing the discrete formalism to the continuous realm, so that we can design a hyperbolic deep model (LSEnet) to learn the neural partitioning tree in the Lorentz model of hyperbolic space. Theoretically, we demonstrate its capability in clustering without requiring $K$ and in identifying minority clusters in imbalanced graphs. Second, we refine hyperbolic representations of the partitioning tree, enhancing graph semantics, for better clustering. Contrastive learning for tree structures is non-trivial and incurs quadratic complexity. Instead, we further advance our theory by discovering an interesting fact that structural entropy indeed bounds the tree contrastive loss. Finally, with an efficient reformulation, we approach graph clustering through a novel augmented structural information learning (ASIL), which offers a simple yet effective objective of augmented structural entropy to seamlessly integrate hyperbolic partitioning tree construction and contrastive learning. With a provable improvement in graph conductance, ASIL achieves effective debiased graph clustering in linear complexity with respect to the graph size. Extensive experiments show that ASIL outperforms 20 strong baselines by an average of +12.42% in NMI on the Citeseer dataset.

Citations: 0
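The classic discrete structural information that the abstract contrasts with can be sketched numerically. The snippet below implements the standard two-level structural entropy of a hard graph partition; the graph, labels, and function name are illustrative, and this is not the paper's differentiable ASIL objective, only the discrete quantity it generalizes.

```python
import numpy as np

def structural_entropy(A, labels):
    """Two-level (partition) structural entropy of a graph.
    A: symmetric adjacency matrix; labels: cluster id per node.
    Lower entropy indicates a partition better aligned with structure."""
    deg = A.sum(axis=1)
    vol = deg.sum()  # equals 2m for an unweighted graph with m edges
    H = 0.0
    for c in np.unique(labels):
        mask = labels == c
        vol_c = deg[mask].sum()                 # module volume
        g_c = A[np.ix_(mask, ~mask)].sum()      # cut weight leaving the module
        d = deg[mask]
        # leaf terms: uncertainty of locating a node inside its module
        H -= np.sum((d / vol) * np.log2(d / vol_c))
        # module term: uncertainty of locating the module itself
        H -= (g_c / vol) * np.log2(vol_c / vol)
    return H
```

On two triangles joined by a single edge, grouping each triangle into its own module yields a lower entropy than a partition that splits a triangle, matching the intuition that structural entropy scores community-like partitions favorably.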
FC$^{2}$: Fast Co-Clustering With Small-Scale Similarity Graph and Bipartite Graph Learning.
IF 18.6 Pub Date : 2026-02-06 DOI: 10.1109/TPAMI.2026.3661650
Xiaowei Zhao, Linrui Xie, Xiaojun Chang, Feiping Nie, Qiang Zhang

Bipartite graph-based co-clustering is efficient in modeling cluster manifold structures. However, existing methods decouple bipartite graph construction from the learning of pseudo-labels for samples and anchors, often leading to suboptimal clustering performance. Moreover, neglecting local manifold relationships among anchors yields inferior anchor pseudo-labels, which further degrades the quality of sample pseudo-labels. To overcome these limitations, we propose a novel model termed Fast Co-Clustering (FC$^{2}$), which jointly captures both local and global correlations between samples and anchors. Specifically, to model the coupling between the one-hot pseudo-labels of samples and anchors, we construct a bipartite graph with adaptively updated weights during the clustering process. To prevent severely imbalanced cluster assignments, we prove the equivalence between maximizing pseudo-label covariance and balancing cluster proportions, and incorporate a balanced regularization term to enhance the rationality of the resulting clusters. Furthermore, the local smoothness of anchor pseudo-labels is preserved via a low-rank decomposition of a compact anchor similarity graph. These two components jointly ensure that spatially adjacent anchors tend to share similar cluster identities, and that samples and anchors in close proximity are also assigned to similar clusters. We develop an efficient iterative optimization algorithm to update all model variables. Extensive experiments on benchmark and synthetic datasets validate the superior performance and efficiency of the proposed method compared with state-of-the-art approaches. Code is available at https://github.com/Vince-Doit/FC2.
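The sample-anchor bipartite construction described above can be illustrated with a classical SVD-based spectral co-clustering in the style of Dhillon (2001). The anchor selection, Gaussian kernel, and function name below are illustrative choices; this sketch is not FC$^{2}$'s joint optimization and omits its adaptive weights and balanced regularization.

```python
import numpy as np

def anchor_cocluster(X, n_anchors=8):
    """Bipartition samples and anchors by spectral co-clustering on a
    sample-anchor bipartite graph. A minimal sketch of the bipartite
    setup, not the paper's method."""
    # pick anchors by uniform striding (a stand-in for k-means anchors)
    anchors = X[:: max(1, len(X) // n_anchors)][:n_anchors]
    # Gaussian similarity between every sample and every anchor
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    B = np.exp(-d2 / d2.mean())
    # degree-normalize the bipartite weight matrix
    Du, Dv = B.sum(1), B.sum(0)
    Bn = B / np.sqrt(Du)[:, None] / np.sqrt(Dv)[None, :]
    U, _, Vt = np.linalg.svd(Bn, full_matrices=False)
    # the 2nd left/right singular vectors jointly bipartition both sides
    sample_labels = (U[:, 1] / np.sqrt(Du) > 0).astype(int)
    anchor_labels = (Vt[1] / np.sqrt(Dv) > 0).astype(int)
    return sample_labels, anchor_labels
```

Because the same SVD couples the two singular-vector families, samples and the anchors they are close to land on the same side of the split, which is the coupling between sample and anchor pseudo-labels that the abstract emphasizes.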

Citations: 0
Robust Matrix Completion With Deterministic Sampling Via Convex Optimization.
IF 18.6 Pub Date : 2026-01-29 DOI: 10.1109/TPAMI.2026.3659200
Yinjian Wang, Wei Li, James E Fowler, Gemine Vivone

The problem of robust matrix completion, that is, the recovery of a low-rank matrix and a sparse matrix from a sampling of their superposition, has been addressed extensively in prior literature. Yet much of this work has focused exclusively on the case in which the matrix sampling is done at random, as this scenario is amenable to theoretical analysis. In contrast, sampling with an arbitrary deterministic pattern is often more accommodating to hardware implementation; consequently, the problem of robust matrix completion under deterministic sampling is considered here. To this end, a restricted approximate isometry property is proposed and used, along with a modified golfing scheme and a slightly strengthened incoherence condition, to prove that the latent low-rank and sparse matrices are uniquely recoverable via convex optimization with asymptotically high probability, providing the first exact-recovery theory for robust matrix completion with arbitrary deterministic sampling. A corresponding convex-optimization algorithm, driven by the traditional nuclear norm, is developed and subsequently generalized by substituting a convolutional nuclear norm to cover a broader range of application scenarios. Empirical experiments on synthetic data verify the proposed theory, while a battery of results on real-world images demonstrates the practical efficacy of the generalized algorithm for robust matrix recovery.
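The convex program in question (nuclear norm plus an $\ell_1$ term, subject to agreement with the data on the observed entries) can be prototyped with a plain ADMM loop. The fixed penalty `mu`, the default `lam`, and the function names below are illustrative assumptions; this is not the authors' algorithm and does not include their convolutional-nuclear-norm generalization.

```python
import numpy as np

def svt(X, tau):
    # prox of tau * nuclear norm: soft-threshold the singular values
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft(X, tau):
    # prox of tau * l1 norm: entrywise soft threshold
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_complete(M, mask, lam=None, mu=1.0, iters=500):
    """ADMM sketch for min ||L||_* + lam*||S||_1
    s.t. P_mask(L + S) = P_mask(M). M should be zero off the mask."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))  # standard RPCA-style choice
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        R = M - L + Y / mu
        # off-mask entries of S are unconstrained and absorb the residual
        S = np.where(mask, soft(R, lam / mu), R)
        Y += mu * (M - L - S)
    return L, S
```

On a synthetic rank-1 matrix with a few large sparse corruptions and 80% of entries observed, the recovered `L` closely matches the ground truth, consistent with the exact-recovery regime the paper analyzes.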

Citations: 0
Tackling Ill-Posedness of Reversible Image Conversion With Well-Posed Invertible Network.
IF 18.6 Pub Date : 2026-01-29 DOI: 10.1109/TPAMI.2026.3659125
Yuanfei Huang, Hua Huang

Reversible image conversion (RIC) suffers from ill-posedness because its forward conversion process constitutes an underdetermined system. Despite employing invertible neural networks (INNs), existing RIC methods remain intrinsically ill-posed, as they inevitably introduce uncertainty by incorporating randomly sampled variables. To tackle this dilemma, we focus on developing a reliable approximate left inverse for the underdetermined system by constructing an overdetermined system with a non-zero Gram determinant, thus ensuring a well-posed solution. Based on this principle, we propose a well-posed invertible $1\times 1$ convolution (WIC), which eliminates the reliance on random variable sampling and enables the development of well-posed invertible networks. Furthermore, we design two innovative networks, WIN-Naïve and WIN, with the latter incorporating advanced skip-connections to enhance long-term memory. Our methods are evaluated across diverse RIC tasks, including reversible image hiding, image rescaling, and image decolorization, consistently achieving state-of-the-art performance. Extensive experiments validate the effectiveness of our approach, demonstrating its ability to overcome the bottlenecks of existing RIC solutions and setting a new benchmark in the field. Codes are available at https://github.com/BNU-ERC-ITEA/WIN.
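The invertibility condition at the heart of the abstract (a non-zero determinant making the channel-mixing map exactly reversible) can be seen in the familiar Glow-style $1\times 1$ convolution. The sketch below is that standard layer, not the proposed WIC, and the function names are illustrative; it shows how an exact left inverse exists whenever the mixing matrix is nonsingular.

```python
import numpy as np

def conv1x1(x, W):
    """Channel-mixing 1x1 convolution on a (C, H, W) tensor:
    y[:, i, j] = W @ x[:, i, j] at every spatial location."""
    return np.einsum('dc,chw->dhw', W, x)

def conv1x1_inverse(y, W):
    # exact left inverse exists iff det(W) != 0: apply W^{-1} per pixel
    return np.einsum('dc,chw->dhw', np.linalg.inv(W), y)
```

Round-tripping any input through `conv1x1` and `conv1x1_inverse` with a nonsingular `W` reproduces it to machine precision, without the randomly sampled variables that make earlier INN-based RIC pipelines ill-posed.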

Citations: 0
Journal
IEEE transactions on pattern analysis and machine intelligence
Copyright © 2023 Book学术 All rights reserved.