
Latest publications in IEEE Transactions on Pattern Analysis and Machine Intelligence

Calibrating Biased Distribution in VFM-derived Latent Space via Cross-Domain Geometric Consistency.
IF 18.6 Pub Date : 2026-02-09 DOI: 10.1109/TPAMI.2026.3662389
Yanbiao Ma, Wei Dai, Zhiwu Lu, Bowei Liu, Jiayi Chen, Wenke Huang, Junchi Yan, Guancheng Wan

Despite the fast progress of deep learning, one long-standing challenge is the gap between the observed training samples and the underlying true distribution. This gap has multiple causes, e.g., sampling bias and noise. In the era of foundation models, we show that when leveraging off-the-shelf (vision) foundation models (e.g., CLIP, DINOv2) for feature extraction, the geometric shapes of the resulting feature distributions exhibit remarkable transferability across domains and datasets. To verify its practical usefulness, we embody our geometric knowledge-guided distribution calibration framework in two popular and challenging settings: federated learning and long-tailed recognition. In the federated setting, we devise a technique for acquiring the global geometric shape under privacy constraints, then leverage this knowledge to generate new samples for clients, with the aim of bridging the gap between local and global observations. In long-tailed learning, our framework utilizes the geometric knowledge transferred from sample-rich categories to recover the true distribution of sample-scarce tail classes. Comprehensive experiments show that the proposed geometric knowledge-guided distribution calibration effectively overcomes information deficits caused by data heterogeneity and sample imbalance, with boosted performance across benchmarks. Code published at: https://github.com/WeiDai-David/2025CVPR GGEUR.
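As a rough illustration of the geometric-shape idea (a minimal numpy sketch, not the authors' GGEUR implementation): the covariance of a sample-rich class serves as transferable geometric knowledge, and a sample-scarce class is calibrated by resampling around its own mean with the transferred shape. All dimensions and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "VFM features": a sample-rich head class and a sample-scarce tail
# class drawn with the same covariance (their shared geometric shape).
cov = np.array([[2.0, 0.8], [0.8, 0.5]])
head = rng.multivariate_normal([0.0, 0.0], cov, size=2000)
tail = rng.multivariate_normal([5.0, 5.0], cov, size=5)

# Transfer the head class's geometric shape (its covariance) to the tail
# class and synthesize new samples around the observed tail mean.
head_cov = np.cov(head, rowvar=False)
tail_mean = tail.mean(axis=0)
synthetic = rng.multivariate_normal(tail_mean, head_cov, size=1000)

# The calibrated distribution now carries the transferred shape.
err = np.linalg.norm(np.cov(synthetic, rowvar=False) - head_cov)
print(synthetic.shape, err)
```

The same resampling step would serve either setting from the abstract: clients drawing extra samples from a shared global shape, or tail classes borrowing the shape of head classes.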

Citations: 0
ASIL: Augmented Structural Information Learning for Deep Graph Clustering in Hyperbolic Space.
IF 18.6 Pub Date : 2026-02-06 DOI: 10.1109/TPAMI.2026.3661424
Li Sun, Zhenhao Huang, Yujie Wang, Hongbo Lv, Chunyang Liu, Hao Peng, Philip S Yu

Graph clustering is a longstanding topic in machine learning. In recent years, deep learning methods have achieved encouraging results, but they still require a predefined cluster number $K$ and typically struggle with imbalanced graphs, especially in identifying minority clusters. These limitations motivate us to study a challenging yet practical problem: deep graph clustering without $K$ that accounts for the imbalance found in real data. We approach this problem from a fresh perspective of information theory (i.e., structural information). In the literature, structural information has rarely been touched on in deep clustering, and the classic definition falls short in its discrete formulation, neglecting node attributes and exhibiting prohibitive complexity. In this paper, we first establish a differentiable structural information, generalizing the discrete formalism to the continuous realm, and design a hyperbolic deep model (LSEnet) to learn the neural partitioning tree in the Lorentz model of hyperbolic space. Theoretically, we demonstrate its capability to cluster without requiring $K$ and to identify minority clusters in imbalanced graphs. Second, we refine the hyperbolic representations of the partitioning tree, enhancing graph semantics, for better clustering. Contrastive learning for tree structures is non-trivial and incurs quadratic complexity. Instead, we further advance our theory by discovering an interesting fact: structural entropy indeed bounds the tree contrastive loss. Finally, with an efficient reformulation, we approach graph clustering through a novel augmented structural information learning (ASIL), which offers a simple yet effective objective of augmented structural entropy to seamlessly integrate hyperbolic partitioning-tree construction and contrastive learning. With a provable improvement in graph conductance, ASIL achieves effective debiased graph clustering in linear complexity with respect to the graph size. Extensive experiments show that ASIL outperforms 20 strong baselines by an average of +12.42% in NMI on the Citeseer dataset.
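For reference, the classic discrete quantity that the paper generalizes can be sketched as follows: the one-dimensional structural entropy of a graph (the Shannon entropy of its degree distribution), a simplified stand-in for the full partitioning-tree formalism the abstract builds on.

```python
import numpy as np

def structural_entropy_1d(adj):
    # One-dimensional structural entropy (the classic discrete form):
    # Shannon entropy of the stationary degree distribution d_i / (2m).
    deg = adj.sum(axis=1)
    two_m = deg.sum()
    p = deg / two_m
    return float(-(p * np.log2(p)).sum())

# A 4-node cycle: every node has degree 2, so the distribution is
# uniform over 4 nodes and the entropy is log2(4) = 2 bits.
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]])
print(structural_entropy_1d(cycle))  # 2.0
```

The paper's differentiable formulation replaces such hard degree counts with continuous quantities so the partitioning tree can be optimized by gradient descent; that machinery is not shown here.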

Citations: 0
FC$^{2}$: Fast Co-Clustering With Small-Scale Similarity Graph and Bipartite Graph Learning.
IF 18.6 Pub Date : 2026-02-06 DOI: 10.1109/TPAMI.2026.3661650
Xiaowei Zhao, Linrui Xie, Xiaojun Chang, Feiping Nie, Qiang Zhang

Bipartite graph-based co-clustering is efficient in modeling cluster manifold structures. However, existing methods decouple bipartite graph construction from the learning of pseudo-labels for samples and anchors, often leading to suboptimal clustering performance. Moreover, neglecting local manifold relationships among anchors yields inferior anchor pseudo-labels, which further degrades the quality of sample pseudo-labels. To overcome these limitations, we propose a novel model termed Fast Co-Clustering (FC$^{2}$), which jointly captures both local and global correlations between samples and anchors. Specifically, to model the coupling between the one-hot pseudo-labels of samples and anchors, we construct a bipartite graph whose weights are adaptively updated during the clustering process. To prevent severely imbalanced cluster assignments, we prove the equivalence between maximizing pseudo-label covariance and balancing cluster proportions, and incorporate a balanced regularization term that makes the resulting cluster assignments more reasonable. Furthermore, the local smoothness of anchor pseudo-labels is preserved via a low-rank decomposition of a compact anchor similarity graph. These two components jointly ensure that spatially adjacent anchors tend to share similar cluster identities, and that samples and anchors in close proximity are also assigned to similar clusters. We develop an efficient iterative optimization algorithm to update all model variables. Extensive experiments on benchmark and synthetic datasets validate the superior performance and efficiency of the proposed method compared with state-of-the-art approaches. Code is available at https://github.com/Vince-Doit/FC2.
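A minimal sketch of the underlying bipartite-graph mechanism (classic Dhillon-style spectral co-clustering via a normalized SVD), assuming a toy block-structured sample-anchor affinity; FC$^{2}$ itself goes further by learning the bipartite weights jointly with the pseudo-labels, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample-anchor affinity: two sample clusters linked to two anchor
# groups, plus a weak uniform background so the bipartite graph is connected.
B = np.full((40, 6), 0.01)
B[:20, :3] += 1.0 + 0.05 * rng.random((20, 3))
B[20:, 3:] += 1.0 + 0.05 * rng.random((20, 3))

# Degree-normalize both sides, then take the SVD of the scaled graph.
Dr = np.diag(1.0 / np.sqrt(B.sum(axis=1)))
Dc = np.diag(1.0 / np.sqrt(B.sum(axis=0)))
U, s, Vt = np.linalg.svd(Dr @ B @ Dc, full_matrices=False)

# The second singular pair co-labels samples (left vector) and anchors
# (right vector): its sign splits both sides into consistent co-clusters.
sample_labels = (U[:, 1] > 0).astype(int)
anchor_labels = (Vt[1] > 0).astype(int)
print(sample_labels[0] != sample_labels[20], sample_labels[0] == anchor_labels[0])
```

The point of the sketch is the coupling the abstract emphasizes: samples and their associated anchors receive matching cluster assignments from one joint decomposition.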

Citations: 0
Robust Matrix Completion With Deterministic Sampling Via Convex Optimization.
IF 18.6 Pub Date : 2026-01-29 DOI: 10.1109/TPAMI.2026.3659200
Yinjian Wang, Wei Li, James E Fowler, Gemine Vivone

The problem of robust matrix completion, i.e., the recovery of a low-rank matrix and a sparse matrix from a sampling of their superposition, has been addressed extensively in prior literature. Yet much of this work has focused exclusively on the case in which the matrix sampling is done at random, as this scenario is amenable to theoretical analysis. In contrast, sampling with an arbitrary deterministic pattern is often more accommodating to hardware implementation; consequently, the problem of robust matrix completion under deterministic sampling is considered. To this end, a restricted approximate isometry property is proposed and used, along with a modified golfing scheme and a slightly strengthened incoherence condition, to prove that the latent low-rank and sparse matrices are uniquely recoverable via convex optimization with asymptotically high probability, providing the first exact-recovery theory for robust matrix completion with arbitrary deterministic sampling. A corresponding convex-optimization algorithm, driven by a traditional nuclear norm, is developed and subsequently generalized by substituting a convolutional nuclear norm in order to cover a broader range of application scenarios. Empirical experiments on synthetic data verify the proposed theory, while a battery of results on real-world images demonstrates the practical efficacy of the generalized algorithm for robust matrix recovery.
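The convex program at the heart of such methods, minimizing the nuclear norm plus a weighted $\ell_1$ norm subject to agreement on the deterministically sampled entries, can be sketched with a simple inexact augmented-Lagrangian alternation. This is a generic illustration (including the common heuristic of filling unobserved entries with the current low-rank estimate), not the paper's algorithm or its convolutional-nuclear-norm extension.

```python
import numpy as np

def svt(X, tau):
    # Singular-value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Entrywise soft-thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

rng = np.random.default_rng(0)
n, r = 30, 2
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S_true = np.zeros((n, n))
S_true[rng.random((n, n)) < 0.05] = 10.0       # sparse gross outliers
mask = np.ones((n, n))
mask[::5, ::3] = 0.0                           # a fixed, deterministic pattern
M = mask * (L_true + S_true)                   # observed entries of the superposition

lam = 1.0 / np.sqrt(n)
mu, rho = 1.25 / np.linalg.norm(M, 2), 1.5
L, S, Y = (np.zeros((n, n)) for _ in range(3))
for _ in range(60):
    # Unobserved entries are filled with the current low-rank estimate.
    L = svt(mask * (M - S + Y / mu) + (1.0 - mask) * L, 1.0 / mu)
    S = mask * shrink(M - L + Y / mu, lam / mu)
    Y = Y + mu * mask * (M - L - S)
    mu = min(mu * rho, 1e7)

feas = np.linalg.norm(mask * (M - L - S)) / np.linalg.norm(M)
print(feas)
```

The loop only illustrates the structure of the program (and drives the observed-entry constraint toward feasibility); the exact-recovery guarantees discussed in the abstract belong to the paper's analysis, not to this sketch.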

Citations: 0
Tackling Ill-Posedness of Reversible Image Conversion With Well-Posed Invertible Network.
IF 18.6 Pub Date : 2026-01-29 DOI: 10.1109/TPAMI.2026.3659125
Yuanfei Huang, Hua Huang

Reversible image conversion (RIC) suffers from ill-posedness because its forward conversion process constitutes an underdetermined system. Despite employing invertible neural networks (INNs), existing RIC methods intrinsically remain ill-posed, as they inevitably introduce uncertainty by incorporating randomly sampled variables. To tackle the ill-posedness dilemma, we focus on developing a reliable approximate left inverse for the underdetermined system by constructing an overdetermined system with a non-zero Gram determinant, thus ensuring a well-posed solution. Based on this principle, we propose a well-posed invertible $1\times 1$ convolution (WIC), which eliminates the reliance on random-variable sampling and enables the development of well-posed invertible networks. Furthermore, we design two innovative networks, WIN-Naïve and WIN, with the latter incorporating advanced skip-connections to enhance long-term memory. Our methods are evaluated across diverse RIC tasks, including reversible image hiding, image rescaling, and image decolorization, consistently achieving state-of-the-art performance. Extensive experiments validate the effectiveness of our approach, demonstrating its ability to overcome the bottlenecks of existing RIC solutions and setting a new benchmark in the field. Code is available at https://github.com/BNU-ERC-ITEA/WIN.
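The basic building block can be illustrated directly: an invertible $1\times 1$ convolution mixes channels at every pixel with a square matrix $W$ and is exactly invertible whenever $\det(W) \neq 0$. The sketch below shows this generic block only, not the paper's WIC construction, which additionally enforces well-posedness of the overall conversion system.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, Wd = 3, 4, 4                                 # channels, height, width
W = rng.standard_normal((C, C)) + 3.0 * np.eye(C)  # well-conditioned mixing matrix
assert abs(np.linalg.det(W)) > 1e-6                # non-zero determinant => invertible

x = rng.standard_normal((C, H, Wd))
y = np.einsum('ij,jhw->ihw', W, x)                     # forward 1x1 convolution
x_rec = np.einsum('ij,jhw->ihw', np.linalg.inv(W), y)  # exact inverse pass

print(np.allclose(x, x_rec))
```

Because the same $W$ acts at every spatial position, inversion is a single matrix inverse applied channel-wise, which is what makes $1\times 1$ convolutions attractive building blocks for invertible networks.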

Citations: 0
Deeply Learned Robust Matrix Completion for Large-scale Low-rank Data Recovery.
IF 18.6 Pub Date : 2026-01-29 DOI: 10.1109/TPAMI.2026.3659041
HanQin Cai, Chandra Kundu, Jialin Liu, Wotao Yin

Robust matrix completion (RMC) is a widely used machine learning tool that simultaneously tackles two critical issues in low-rank data analysis: missing data entries and extreme outliers. This paper proposes a novel scalable and learnable non-convex approach, coined Learned Robust Matrix Completion (LRMC), for large-scale RMC problems. LRMC enjoys low computational complexity with linear convergence. Motivated by the proposed theorem, the free parameters of LRMC can be effectively learned via deep unfolding to achieve optimal performance. Furthermore, this paper proposes a flexible feedforward-recurrent-mixed neural network framework that extends deep unfolding from a fixed number of iterations to infinite iterations. The superior empirical performance of LRMC is verified with extensive experiments against the state-of-the-art on synthetic datasets and real applications, including video background subtraction, ultrasound imaging, face modeling, and cloud removal from satellite imagery.
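Deep unfolding, the core tool here, turns an iterative solver into a feedforward network whose per-iteration constants become free parameters. The minimal LISTA-style sketch below (for sparse recovery rather than the paper's RMC iteration, with hand-set rather than learned parameters) illustrates that structure.

```python
import numpy as np

def shrink(x, t):
    # Soft-thresholding: proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)

# Sparse-recovery toy problem with orthonormal measurement rows.
Q, _ = np.linalg.qr(rng.standard_normal((50, 20)))
A = Q.T                                  # 20 x 50, so A @ A.T = I
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
y = A @ x_true

# Unfolded network: each layer is one ISTA iteration; the per-layer step
# sizes and thresholds are exactly the free parameters a learned model
# would train end-to-end (here they are simply hand-set).
steps = [1.0] * 15
thresholds = [0.02] * 15

x = np.zeros(50)
for step, th in zip(steps, thresholds):
    x = shrink(x + step * A.T @ (y - A @ x), th)

print(np.linalg.norm(y - A @ x) / np.linalg.norm(y))
```

The paper's feedforward-recurrent-mixed framework then lets such a finite stack of layers behave like an iteration run to convergence; that extension is not reproduced here.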

Citations: 0
SEGA: A Transferable Signed Ensemble Gaussian Black-Box Attack Against No-Reference Image Quality Assessment Models.
IF 18.6 Pub Date : 2026-01-29 DOI: 10.1109/TPAMI.2026.3659164
Yujia Liu, Dingquan Li, Zhixuan Li, Tiejun Huang

No-Reference Image Quality Assessment (NR-IQA) models play an important role in various real-world applications. Recently, adversarial attacks against NR-IQA models have attracted increasing attention, as they provide valuable insights for revealing model vulnerabilities and guiding robust system design. Some effective attacks have been proposed against NR-IQA models in white-box settings, where the attacker has full access to the target model. However, these attacks often suffer from poor transferability to unknown target models in more realistic black-box scenarios, where the target model is inaccessible. This work makes the first attempt to address the challenge of low transferability in attacking NR-IQA models by proposing a transferable Signed Ensemble Gaussian black-box Attack (SEGA). The main idea is to approximate the gradient of the target model by applying Gaussian smoothing to source models and ensembling their smoothed gradients. To ensure the imperceptibility of adversarial perturbations, SEGA further removes inappropriate perturbations using a specially designed perturbation filter mask. Experimental results demonstrate the superior transferability of SEGA, validating its effectiveness in enabling successful transfer-based black-box attacks against NR-IQA models. Code for this paper is available at https://github.com/YogaLYJ/SEGA_IQA.
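The core gradient-approximation step can be sketched as follows, with toy linear stand-ins for the source models. The smoothing estimator and the sign-of-ensemble step follow the abstract's description, while everything else (the models, dimensions, and probe count) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_smoothed_grad(f, x, sigma=0.1, n_probes=500):
    # Monte-Carlo estimate of the Gaussian-smoothed gradient:
    # E[(f(x + sigma*u) - f(x)) * u] / sigma with u ~ N(0, I).
    u = rng.standard_normal((n_probes, x.size))
    diffs = np.array([f(x + sigma * ui) - f(x) for ui in u])
    return (diffs[:, None] * u).mean(axis=0) / sigma

# Toy linear "source models" standing in for white-box NR-IQA networks;
# their gradient directions are similar, mimicking transferability.
f1 = lambda x: float(x @ np.array([1.0, 2.0]))
f2 = lambda x: float(x @ np.array([1.5, 1.5]))

x = np.array([0.3, -0.2])
g = sum(gaussian_smoothed_grad(f, x) for f in (f1, f2)) / 2
direction = np.sign(g)    # signed ensemble direction for the perturbation
print(direction)
```

The sign step discards model-specific gradient magnitudes and keeps only the direction the ensemble agrees on, which is what a black-box attacker would transfer to the unseen target model.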

Citations: 0
Privacy-Preserving Model Transcription With Differentially Private Synthetic Distillation.
IF 18.6 Pub Date : 2026-01-29 DOI: 10.1109/TPAMI.2026.3659110
Bochao Liu, Shiming Ge, Pengju Wang, Shikun Li, Tongliang Liu

While many deep learning models trained on private datasets have been deployed in various practical tasks, they may pose a privacy leakage risk as attackers could recover informative data or label knowledge from models. In this work, we present privacy-preserving model transcription, a data-free model-to-model conversion solution to facilitate model deployment with a privacy guarantee. To this end, we propose a cooperative-competitive learning approach termed differentially private synthetic distillation that learns to convert a pretrained model (teacher) into its privacy-preserving counterpart (student) via a trainable generator without access to private data. The learning collaborates with three players in a unified framework and performs alternate optimization: i) the generator is learned to generate synthetic data, ii) the teacher and student accept the synthetic data and compute differentially private labels via flexible noisy perturbation of the data or labels, and iii) the student is updated with noisy labels and the generator is updated by taking the student as a discriminator for adversarial training. We theoretically prove that our approach can guarantee differential privacy and convergence. The transcribed student has good performance and privacy protection, while the resulting generator can generate private synthetic data for downstream tasks. Extensive experiments clearly demonstrate that our approach outperforms 26 state-of-the-art methods.

{"title":"Privacy-Preserving Model Transcription With Differentially Private Synthetic Distillation.","authors":"Bochao Liu, Shiming Ge, Pengju Wang, Shikun Li, Tongliang Liu","doi":"10.1109/TPAMI.2026.3659110","DOIUrl":"https://doi.org/10.1109/TPAMI.2026.3659110","url":null,"abstract":"<p><p>While many deep learning models trained on private datasets have been deployed in various practical tasks, they may pose a privacy leakage risk as attackers could recover informative data or label knowledge from models. In this work, we present privacy-preserving model transcription, a data-free model-to-model conversion solution to facilitate model deployment with a privacy guarantee. To this end, we propose a cooperative-competitive learning approach termed differentially private synthetic distillation that learns to convert a pretrained model (teacher) into its privacy-preserving counterpart (student) via a trainable generator without access to private data. The learning collaborates with three players in a unified framework and performs alternate optimization: i) the generator is learned to generate synthetic data, ii) the teacher and student accept the synthetic data and compute differential private labels by flexible data or label noisy perturbation, and iii) the student is updated with noisy labels and the generator is updated by taking the student as a discriminator for adversarial training. We theoretically prove that our approach can guarantee differential privacy and convergence. The transcribed student has good performance and privacy protection, while the resulting generator can generate private synthetic data for downstream tasks. 
Extensive experiments clearly demonstrate that our approach outperforms 26 state-of-the-arts.</p>","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"PP ","pages":""},"PeriodicalIF":18.6,"publicationDate":"2026-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146088433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
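The three-player alternation described above can be illustrated with a toy loop. This is a hedged numpy sketch, not the paper's algorithm: linear stand-ins replace the neural teacher, student, and generator; the generator is held fixed rather than trained adversarially; and the noise scale sigma is an illustrative choice, not a calibrated privacy budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_noisy_labels(logits, sigma=0.5):
    """Gaussian mechanism on the teacher's logits: the student only ever sees
    noisy argmax labels, never the private data or the clean teacher outputs."""
    return (logits + rng.normal(0.0, sigma, logits.shape)).argmax(axis=1)

# Toy three-player loop with linear stand-ins for the real networks.
d, k, n = 8, 3, 64
teacher_W = rng.standard_normal((d, k))   # pretrained on private data, frozen
gen_W = rng.standard_normal((d, d))       # generator (fixed here; trained adversarially in the paper)
student_W = np.zeros((d, k))

for _ in range(100):
    z = rng.standard_normal((n, d))
    x_syn = z @ gen_W                              # i)  generator synthesizes data
    y_noisy = dp_noisy_labels(x_syn @ teacher_W)   # ii) teacher labels under DP noise
    # iii) student: one softmax-regression gradient step on the noisy labels
    logits = x_syn @ student_W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    student_W -= 0.1 * x_syn.T @ (p - np.eye(k)[y_noisy]) / n
```

Even though every label the student sees is perturbed, it can still track the teacher's decision boundary, which is the intuition behind distilling through a noisy channel.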
Citations: 0
Adversarial Imitation Learning with General Function Approximation: Theoretical Analysis and Practical Algorithms.
IF 18.6 Pub Date : 2026-01-26 DOI: 10.1109/TPAMI.2026.3657578
Tian Xu, Zhilong Zhang, Zexuan Chen, Ruishuo Chen, Yihao Sun, Yang Yu

Adversarial imitation learning (AIL), a prominent approach in imitation learning, has achieved significant practical success powered by neural network approximation. However, existing theoretical analyses of AIL are primarily confined to simplified settings, such as tabular and linear function approximation, and involve complex algorithmic designs that impede practical implementation. This creates a substantial gap between theory and practice. This paper bridges this gap by exploring the theoretical underpinnings of online AIL with general function approximation. We introduce a novel framework called optimization-based AIL (OPT-AIL), which performs online optimization for reward learning coupled with optimism-regularized optimization for policy learning. Within this framework, we develop two concrete methods: model-free OPT-AIL and model-based OPT-AIL. Our theoretical analysis demonstrates that both variants achieve polynomial expert sample complexity and interaction complexity for learning near-expert policies. To the best of our knowledge, they represent the first provably efficient AIL methods under general function approximation. From a practical standpoint, OPT-AIL requires only the approximate optimization of two objectives, thereby facilitating practical implementation.

{"title":"Adversarial Imitation Learning with General Function Approximation: Theoretical Analysis and Practical Algorithms.","authors":"Tian Xu, Zhilong Zhang, Zexuan Chen, Ruishuo Chen, Yihao Sun, Yang Yu","doi":"10.1109/TPAMI.2026.3657578","DOIUrl":"https://doi.org/10.1109/TPAMI.2026.3657578","url":null,"abstract":"<p><p>Adversarial imitation learning (AIL), a prominent approach in imitation learning, has achieved significant practical success powered by neural network approximation. However, existing theoretical analyses of AIL are primarily confined to simplified settings-such as tabular and linear function approximation-and involve complex algorithmic designs that impede practical implementation. This creates a substantial gap between theory and practice. This paper bridges this gap by exploring the theoretical underpinnings of online AIL with general function approximation. We introduce a novel framework called optimization-based AIL (OPT-AIL), which performs online optimization for reward learning coupled with optimism-regularized optimization for policy learning. Within this framework, we develop two concrete methods: model-free OPT-AIL and model-based OPT-AIL. Our theoretical analysis demonstrates that both variants achieve polynomial expert sample complexity and interaction complexity for learning near-expert policies. To the best of our knowledge, they represent the first provably efficient AIL methods under general function approximation. From a practical standpoint, OPT-AIL requires only the approximate optimization of two objectives, thereby facilitating practical implementation. 
Empirical studies demonstrate that OPT-AIL outperforms previous state-of-the-art deep AIL methods across several challenging tasks.</p>","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"PP ","pages":""},"PeriodicalIF":18.6,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146055738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
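The interplay of the two objectives, online reward learning and optimism-regularized policy learning, can be illustrated on a one-state bandit. This is a hedged toy sketch, not OPT-AIL itself: the softmax policy, the 1/sqrt(count) bonus, and the step size are illustrative assumptions, and general function approximation is collapsed to a table over three actions.

```python
import numpy as np

def opt_ail_bandit(expert_dist, n_iters=200, eta=0.5):
    """Toy two-objective loop on a one-state bandit: the reward is pushed up on
    expert actions and down on agent actions (online optimization for reward
    learning), while the policy is the softmax of reward plus a decaying
    exploration bonus (optimism-regularized policy learning)."""
    k = len(expert_dist)
    reward = np.zeros(k)
    counts = np.ones(k)                        # expected visit counts
    for _ in range(n_iters):
        bonus = 1.0 / np.sqrt(counts)          # optimism: favour rarely tried actions
        logits = reward + bonus
        pi = np.exp(logits - logits.max())
        pi /= pi.sum()
        counts += pi
        reward += eta * (expert_dist - pi)     # online gradient step on the reward
    return pi
```

As the reward separates expert from agent behaviour, the policy concentrates on the expert's action while the optimism bonus fades for well-explored actions.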
Citations: 0
Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment.
IF 18.6 Pub Date : 2026-01-26 DOI: 10.1109/TPAMI.2026.3657354
Lingling Xu, Haoran Xie, S Joe Qin, Xiaohui Tao, Fu Lee Wang

With the continuous growth in the number of parameters of the Transformer-based pretrained language models (PLMs), particularly the emergence of large language models (LLMs) with billions of parameters, many natural language processing (NLP) tasks have demonstrated remarkable success. However, the enormous size and computational demands of these models pose significant challenges for adapting them to specific downstream tasks, especially in environments with limited computational resources. Parameter-Efficient Fine-Tuning (PEFT) offers an effective solution by reducing the number of fine-tuning parameters and memory usage while achieving comparable performance to full fine-tuning. The demand for fine-tuning PLMs, especially LLMs, has led to a surge in the development of PEFT methods, as depicted in Fig. 1. In this paper, we present a comprehensive and systematic review of PEFT methods for PLMs. We summarize these PEFT methods, discuss their applications, and outline future directions. Furthermore, extensive experiments are conducted using several representative PEFT methods to better understand their effectiveness in parameter efficiency and memory efficiency.

{"title":"Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment.","authors":"Lingling Xu, Haoran Xie, S Joe Qin, Xiaohui Tao, Fu Lee Wang","doi":"10.1109/TPAMI.2026.3657354","DOIUrl":"https://doi.org/10.1109/TPAMI.2026.3657354","url":null,"abstract":"<p><p>With the continuous growth in the number of parameters of the Transformer-based pretrained language models (PLMs), particularly the emergence of large language models (LLMs) with billions of parameters, many natural language processing (NLP) tasks have demonstrated remarkable success. However, the enormous size and computational demands of these models pose significant challenges for adapting them to specific downstream tasks, especially in environments with limited computational resources. Parameter-Efficient Fine-Tuning (PEFT) offers an effective solution by reducing the number of fine-tuning parameters and memory usage while achieving comparable performance to full fine-tuning. The demands for fine-tuning PLMs, especially LLMs, have led to a surge in the development of PEFT methods, as depicted in Fig. 1. In this paper, we present a comprehensive and systematic review of PEFT methods for PLMs. We summarize these PEFT methods, discuss their applications, and outline future directions. Furthermore, extensive experiments are conducted using several representative PEFT methods to better understand their effectiveness in parameter efficiency and memory efficiency. 
By offering insights into the latest advancements and practical applications, this survey serves as an invaluable resource for researchers and practitioners seeking to navigate the challenges and opportunities presented by PEFT in the context of PLMs.</p>","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"PP ","pages":""},"PeriodicalIF":18.6,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146055668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
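As a concrete instance of the parameter savings such methods target, LoRA, one of the most widely used PEFT techniques, freezes the pretrained weight and trains only a low-rank update. The numpy sketch below is illustrative rather than taken from the survey; the rank, scaling, and initialization follow common LoRA practice (zero-initialized B so training starts from the pretrained model).

```python
import numpy as np

class LoRALinear:
    """Frozen pretrained weight W (d_out x d_in) plus a trainable low-rank
    update scale * B @ A; only r * (d_in + d_out) parameters are tuned
    instead of d_in * d_out."""
    def __init__(self, W, r=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = W                                       # frozen
        d_out, d_in = W.shape
        self.A = rng.standard_normal((r, d_in)) * 0.01   # small random init
        self.B = np.zeros((d_out, r))                    # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        # x: (batch, d_in) -> (batch, d_out)
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T
```

With B initialized to zero the layer's output initially equals the frozen layer's, so fine-tuning starts exactly from the pretrained model; after training, W + scale * B @ A can be merged into a single matrix, removing any inference overhead.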
Citations: 0