
Latest publications in IEEE Transactions on Pattern Analysis and Machine Intelligence

Survey of Computerized Adaptive Testing: a Machine Learning Perspective.
IF 18.6 Pub Date: 2026-03-10 DOI: 10.1109/TPAMI.2026.3672850
Yan Zhuang, Qi Liu, Haoyang Bi, Zhenya Huang, Weizhe Huang, Jiatong Li, Junhao Yu, Zirui Liu, Zirui Hu, Yuting Hong, Zachary A Pardos, Haiping Ma, Mengxiao Zhu, Shijin Wang, Enhong Chen

Computerized Adaptive Testing (CAT) offers an efficient and personalized method for assessing examinee proficiency by dynamically adjusting test questions based on individual performance. Compared to traditional, non-personalized testing methods, CAT requires fewer questions and provides more accurate assessments. As a result, CAT has been widely adopted across various fields, including education, healthcare, sports, sociology, and the evaluation of AI models. While traditional methods rely on psychometrics and statistics, the increasing complexity of large-scale testing has spurred the integration of machine learning techniques. This paper aims to provide a machine learning-focused survey on CAT, presenting a fresh perspective on this adaptive testing paradigm. We delve into measurement models, question selection algorithms, bank construction, and test control within CAT, exploring how machine learning can optimize these components. Through an analysis of current methods and their strengths, limitations, and challenges, we strive to develop robust, fair, and efficient CAT systems. By bridging psychometric-driven CAT research with machine learning, this survey advocates for a more inclusive and interdisciplinary approach to the future of adaptive testing.
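The adaptive loop surveyed here can be sketched with a classical two-parameter-logistic (2PL) IRT model and maximum-Fisher-information item selection. The item bank, learning rate, and gradient-ascent ability update below are illustrative assumptions for a minimal sketch, not the survey's own algorithm:

```python
import math

# Hypothetical 2PL item bank: (discrimination a, difficulty b) per question.
BANK = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 1.2)]

def p_correct(theta, a, b):
    """2PL IRT probability that an examinee of ability theta answers correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Item information I(theta) = a^2 * p * (1 - p): the usual CAT selection score."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def update_theta(responses, theta=0.0, lr=0.5, steps=50):
    """Gradient ascent on the 2PL log-likelihood of responses seen so far.
    responses is a list of ((a, b), y) pairs with y in {0, 1}."""
    for _ in range(steps):
        theta += lr * sum(a * (y - p_correct(theta, a, b))
                          for (a, b), y in responses)
    return theta

# One adaptive step: pick the most informative remaining item at theta = 0.
remaining = set(range(len(BANK)))
best = max(remaining, key=lambda i: fisher_info(0.0, *BANK[i]))
```

After each response, `update_theta` re-estimates ability and the loop repeats, which is why CAT needs fewer questions than a fixed-form test.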

Citations: 0
Variational Bayesian Personalized Ranking.
IF 18.6 Pub Date: 2026-03-10 DOI: 10.1109/TPAMI.2026.3672705
Bin Liu, Xiaohong Liu, Qin Luo, Ziqiao Shang, Jielei Chu, Lin Ma, Zhaoyu Li, Fei Teng, Guangtao Zhai, Tianrui Li

Pairwise learning underpins implicit collaborative filtering, yet its effectiveness is often hindered by sparse supervision, noisy interactions, and popularity-driven exposure bias. In this paper, we propose Variational Bayesian Personalized Ranking (VarBPR), a tractable variational framework for implicit-feedback pairwise learning that offers principled exposure controllability and theoretical interpretability. VarBPR reformulates pairwise learning as variational inference over discrete latent indexing variables, explicitly modeling noise and indexing uncertainty, and divides training into two stages: variational inference, which solves for the variational posteriors, and variational learning, which updates model parameters based on these posteriors. In the variational inference stage, we develop a variational formulation that integrates preference alignment, denoising, and popularity debiasing under a unified ELBO/regularization objective, deriving closed-form posteriors with clear control semantics: the prior encodes a target exposure pattern, while temperature/regularization strength controls posterior-prior adherence. As a result, exposure controllability becomes an endogenous and interpretable outcome of variational inference. In the variational learning stage, we propose a posterior-compression objective that reduces the ideal ELBO's computational complexity from polynomial to linear, with the approximation justified by an explicit Jensen-gap upper bound. Theoretically, we provide interpretable generalization guarantees by identifying a structural error component and revealing the opportunity cost of prioritizing certain exposure patterns (e.g., long-tail), offering a concrete analytical lens for designing controllable recommender systems. Empirically, we validate VarBPR across popular backbones; it demonstrates consistent gains in ranking accuracy, enables controlled long-tail exposure, and preserves the linear-time complexity of BPR.
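As background, VarBPR builds on the standard BPR objective, which maximizes log sigma(x_ui - x_uj) over (user, positive, negative) triples. A minimal pure-Python sketch of one BPR SGD step on embedding vectors follows; the embedding sizes and learning rate are illustrative assumptions, and this is plain BPR, not the VarBPR inference procedure:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bpr_loss(u, i, j):
    """BPR loss -log sigma(x_ui - x_uj) for user u, positive item i, negative j."""
    return -math.log(sigmoid(dot(u, i) - dot(u, j)))

def bpr_step(u, i, j, lr=0.1):
    """One SGD step on the BPR loss: increases x_ui - x_uj, i.e. ranks the
    observed (positive) item above the unobserved (negative) one."""
    g = 1.0 - sigmoid(dot(u, i) - dot(u, j))  # shared gradient factor
    u_new = [uf + lr * g * (pf - nf) for uf, pf, nf in zip(u, i, j)]
    i_new = [pf + lr * g * uf for pf, uf in zip(i, u)]
    j_new = [nf - lr * g * uf for nf, uf in zip(j, u)]
    return u_new, i_new, j_new
```

VarBPR replaces the hard triple sampling above with posteriors over latent indexing variables, which is where the exposure control enters.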

Citations: 0
2025 Reviewers List*
IF 18.6 Pub Date: 2026-03-06 DOI: 10.1109/TPAMI.2026.3660248
Citations: 0
Deep Robust Reversible Watermarking.
IF 18.6 Pub Date: 2026-03-06 DOI: 10.1109/TPAMI.2026.3670969
Jiale Chen, Wei Wang, Chongyang Shi, Li Dong, Yuanman Li, Xiping Hu

Robust Reversible Watermarking (RRW) enables perfect recovery of cover images and watermarks in lossless channels while ensuring robust watermark extraction under lossy channels. However, existing RRW methods, mostly non-deep learning-based, suffer from complex designs, high computational costs, and poor robustness, limiting their practical applications. To address these issues, this paper proposes Deep Robust Reversible Watermarking (DRRW), a deep learning-based RRW scheme. DRRW introduces an Integer Invertible Watermark Network (iIWN) to achieve an invertible mapping between integer data distributions, fundamentally addressing the limitations of conventional RRW approaches. Unlike traditional RRW methods requiring task-specific designs for different distortions, DRRW adopts an encoder-noise layer-decoder framework, enabling adaptive robustness against various distortions through end-to-end training. During inference, the cover image and watermark are mapped into an overflowed stego image and latent variables. Arithmetic coding efficiently compresses these into a compact bitstream, which is embedded via reversible data hiding to ensure lossless recovery of both the image and watermark. To reduce pixel overflow, we introduce an overflow penalty loss, significantly shortening the auxiliary bitstream while improving both robustness and stego image quality. Additionally, we propose an adaptive weight adjustment strategy that eliminates the need to manually preset the watermark loss weight, ensuring improved training stability and performance. Experiments on multiple datasets demonstrate that the proposed DRRW addresses key challenges in current RRW methods and significantly advances the practical deployment of RRW. The source code is available at https://github.com/chenoly/Deep-Robust-Reversible-Watermark.
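The overflow penalty mentioned in the abstract can be illustrated conceptually: penalize stego pixel values that fall outside the representable [0, 255] range, so fewer out-of-range residuals need to be carried in the auxiliary bitstream. The hinge form below is an illustrative assumption, not the paper's exact loss:

```python
def overflow_penalty(stego_pixels):
    """Hinge penalty on pixel values outside [0, 255]; zero for in-range pixels.
    Minimizing this term pushes the stego image back into the valid range."""
    return sum(max(0.0, v - 255.0) + max(0.0, -v) for v in stego_pixels)
```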

Citations: 0
Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective.
IF 18.6 Pub Date: 2026-03-06 DOI: 10.1109/TPAMI.2026.3670952
Jingren Liu, Zhong Ji, YunLong Yu, Jiale Cao, Yanwei Pang, Jungong Han, Xuelong Li

Parameter-efficient fine-tuning for continual learning (PEFT-CL) has shown promise in adapting pre-trained models to sequential tasks while mitigating the catastrophic forgetting problem. However, understanding the mechanisms that dictate continual performance in this paradigm remains elusive. To unravel this mystery, we undertake a rigorous analysis of PEFT-CL dynamics to derive relevant metrics for continual scenarios using Neural Tangent Kernel (NTK) theory. With the aid of NTK as a mathematical analysis tool, we recast the challenge of test-time forgetting into the quantifiable generalization gaps during training, identifying three key factors that influence these gaps and the performance of PEFT-CL: training sample size, task-level feature orthogonality, and regularization. To address these challenges, we introduce NTK-CL, a novel framework that eliminates task-specific parameter storage while adaptively generating task-relevant features. Aligning with theoretical guidance, NTK-CL triples the feature representation of each sample, theoretically and empirically reducing the magnitude of both task-interplay and task-specific generalization gaps. Grounded in NTK analysis, our framework imposes an adaptive exponential moving average mechanism and constraints on task-level feature orthogonality, maintaining intra-task NTK forms while attenuating inter-task NTK forms. Ultimately, by fine-tuning optimizable parameters with appropriate regularization, NTK-CL achieves state-of-the-art performance on established PEFT-CL benchmarks. This work provides a theoretical foundation for understanding and improving PEFT-CL models, offering insights into the interplay between feature representation, task orthogonality, and generalization, contributing to the development of more efficient continual learning systems.
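A task-level feature orthogonality constraint of the kind described can be illustrated as a penalty on the cross-Gram matrix between two tasks' feature vectors. The squared-inner-product form below is an illustrative stand-in, not the paper's NTK-derived constraint:

```python
def cross_gram_penalty(feats_a, feats_b):
    """Sum of squared inner products between feature vectors of two tasks.
    Equals zero exactly when every task-A feature is orthogonal to every
    task-B feature, so minimizing it attenuates inter-task interference."""
    return sum(sum(x * y for x, y in zip(fa, fb)) ** 2
               for fa in feats_a for fb in feats_b)
```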

Citations: 0
Towards Transferable Defense Against Malicious Image Edits.
IF 18.6 Pub Date: 2026-03-04 DOI: 10.1109/TPAMI.2026.3670292
Jie Zhang, Shuai Dong, Shiguang Shan, Xilin Chen

Recent approaches employing imperceptible perturbations in input images have demonstrated promising potential to counter malicious manipulations in diffusion-based image editing systems. However, existing methods suffer from limited transferability in cross-model evaluations. To address this, we propose Transferable Defense Against Malicious Image Edits (TDAE), a novel bimodal framework that enhances image immunity against malicious edits through coordinated image-text optimization. Specifically, at the visual defense level, we introduce FlatGrad Defense Mechanism (FDM), which incorporates gradient regularization into the adversarial objective. By explicitly steering the perturbations toward flat minima, FDM amplifies immune robustness against unseen editing models. For textual enhancement protection, we propose an adversarial optimization paradigm named Dynamic Prompt Defense (DPD), which periodically refines text embeddings to align the editing outcomes of immunized images with those of the original images, then updates the images under optimized embeddings. Through iterative adversarial updates to diverse embeddings, DPD enforces the generation of immunized images that seek a broader set of immunity-enhancing features, thereby achieving cross-model transferability. Extensive experimental results demonstrate that our TDAE achieves state-of-the-art performance in mitigating malicious edits under both intra- and cross-model evaluations.
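Immunizing perturbations of this kind are typically computed by iterating a projected gradient step under an L-infinity budget. The sketch below is generic PGD with assumed epsilon and step-size values, not TDAE's FlatGrad update:

```python
def pgd_linf_step(x, x_orig, grad, eps=8 / 255, alpha=2 / 255):
    """One L_inf PGD step: move each pixel along the gradient sign, then
    project back into the eps-ball around the original image and into the
    valid [0, 1] intensity range."""
    stepped = [xi + alpha * (1.0 if g >= 0 else -1.0)
               for xi, g in zip(x, grad)]
    return [min(max(v, xo - eps, 0.0), min(xo + eps, 1.0))
            for v, xo in zip(stepped, x_orig)]
```

Repeating this step with gradients from a surrogate editing model yields a perturbation that stays imperceptible while degrading the edit.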

Citations: 0
Reservoir-Based Graph Convolutional Networks.
IF 18.6 Pub Date: 2026-03-04 DOI: 10.1109/TPAMI.2026.3670423
Mayssa Soussia, Gita Ayu Salsabila, Mohamed Ali Mahjoub, Islem Rekik

Message passing is a core mechanism in Graph Neural Networks (GNNs), enabling the iterative update of node embeddings by aggregating information from neighboring nodes. Graph Convolutional Networks (GCNs) exemplify this approach by adapting convolutional operations for graph structures, allowing features from adjacent nodes to be combined effectively. However, GCNs encounter challenges with complex or dynamic data. Capturing long-range dependencies often requires deeper layers, which not only increase computational costs but also lead to over-smoothing, where node embeddings become indistinguishable. To overcome these challenges, reservoir computing has been integrated into GNNs, leveraging iterative message-passing dynamics for stable information propagation without extensive parameter tuning. Despite its promise, existing reservoir-based models lack structured convolutional mechanisms, limiting their ability to accurately aggregate multi-hop neighborhood information. To address these limitations, we propose RGC-Net (Reservoir-based Graph Convolutional Network), which integrates reservoir dynamics with structured graph convolution. Key contributions include: (i) a reimagined convolutional framework with fixed-random reservoir weights and a leaky integrator to enhance feature retention; (ii) a robust, adaptable model for graph classification; and (iii) an RGC-Net-powered transformer for graph generation with application to dynamic brain connectivity. Extensive experiments show RGC-Net achieves state-of-the-art performance in classification and generative tasks, including brain graph evolution, with faster convergence and mitigated over-smoothing. Our source code is available at https://github.com/basiralab/RGC-Net.
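The combination of fixed-random reservoir weights and a leaky integrator can be sketched as: aggregate neighbor states, pass them through a frozen weight, and blend the result with each node's previous state. The scalar state, mean aggregator, and leak rate below are illustrative assumptions for a minimal sketch of the idea:

```python
import math

def reservoir_gcn_step(h, adj, w, leak=0.3):
    """One reservoir-style graph update per node v:
        h'[v] = (1 - leak) * h[v] + leak * tanh(w * mean of neighbor states).
    w is a fixed random weight that is never trained; adj maps node -> neighbors."""
    out = []
    for v, hv in enumerate(h):
        agg = sum(h[u] for u in adj[v]) / max(len(adj[v]), 1)
        out.append((1 - leak) * hv + leak * math.tanh(w * agg))
    return out
```

Because `w` stays fixed, stability comes from the leaky blend rather than from parameter tuning, which is the appeal of reservoir computing here.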

Citations: 0
Understanding Data Influence With Differential Approximation.
IF 18.6 Pub Date: 2026-03-04 DOI: 10.1109/TPAMI.2026.3670471
Haoru Tan, Sitong Wu, Xiuzhe Wu, Wang Wang, Bo Zhao, Zeke Xie, Gui-Song Xia, Xiaojuan Qi

Data plays a pivotal role in the groundbreaking advancements in artificial intelligence. The quantitative analysis of data significantly contributes to model training, enhancing both the efficiency and quality of data utilization. However, existing data analysis tools often lag in accuracy. For instance, many of these tools even assume that the loss function of neural networks is convex. These limitations make it challenging to implement current methods effectively. In this paper, we introduce a new formulation to approximate a sample's influence by accumulating the differences in influence between consecutive learning steps, which we term Diff-In. Specifically, we formulate the sample-wise influence as the cumulative sum of its changes/differences across successive training iterations. By employing second-order approximations, we approximate these difference terms with high accuracy while eliminating the need for model convexity required by existing methods. Despite being a second-order method, Diff-In maintains computational complexity comparable to that of first-order methods and remains scalable. This efficiency is achieved by computing the product of the Hessian and gradient, which can be efficiently approximated using finite differences of first-order gradients. We assess the approximation accuracy of Diff-In both theoretically and empirically. Our theoretical analysis demonstrates that Diff-In achieves significantly lower approximation error compared to existing influence estimators. Extensive experiments further confirm its superior performance across multiple benchmark datasets in three data-centric tasks: data cleaning, data deletion, and coreset selection. Notably, our experiments on data pruning for large-scale vision-language pre-training show that Diff-In can scale to millions of data points and outperforms strong baselines.
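The Hessian-gradient product the abstract relies on can be approximated with finite differences of first-order gradients, H v ≈ (∇L(θ + εv) − ∇L(θ)) / ε, avoiding any explicit second-order computation. The quadratic test loss below is an illustrative assumption (for a quadratic, the finite difference is exact):

```python
def hvp_finite_diff(grad_fn, theta, v, eps=1e-4):
    """Approximate the Hessian-vector product H @ v using a finite difference
    of first-order gradients: (grad(theta + eps * v) - grad(theta)) / eps."""
    g0 = grad_fn(theta)
    g1 = grad_fn([t + eps * vi for t, vi in zip(theta, v)])
    return [(a - b) / eps for a, b in zip(g1, g0)]

# For L(theta) = 0.5 * (2*t0^2 + 4*t1^2): grad = [2*t0, 4*t1], Hessian = diag(2, 4).
grad = lambda th: [2.0 * th[0], 4.0 * th[1]]
approx = hvp_finite_diff(grad, [1.0, -1.0], [1.0, 1.0])  # analytic H @ v = [2.0, 4.0]
```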

Citations: 0
Flexible-weighted Chamfer Distance: Enhanced Objective Function for Point Cloud Completion.
IF 18.6 Pub Date : 2026-03-03 DOI: 10.1109/TPAMI.2026.3669003
Jie Li, Shengwei Tian, Long Yu, Xin Ning

The Chamfer Distance (CD) is a cornerstone objective function for point cloud completion, yet its inherent symmetric weighting mechanism limits the quality of the generated results. By penalizing local detail deviations and global coverage deficiencies equally, standard CD often causes structural defects such as point aggregation and incomplete spatial structures. We introduce the Flexible-weighted Chamfer Distance (FCD), which decouples CD into local precision and global completeness sub-objectives. FCD employs an asymmetric weighting strategy that prioritizes global structural integrity, steering the optimization away from sub-optimal solutions. FCD is a plug-and-play module with negligible overhead, and extensive experiments on state-of-the-art networks demonstrate that it significantly enhances global distribution metrics while preserving local precision. Specifically, on the ShapeNet55 benchmark using AdaPoinTr, FCD reduces the Density-aware Chamfer Distance (DCD) by approximately 12.4% (from 0.613 to 0.537), effectively mitigating point clustering. Similarly, on the PCN dataset, the proposed method reduces the Earth Mover's Distance (EMD) from 23.79 to 21.40, demonstrating superior global uniformity compared to the standard CD baseline. Furthermore, FCD demonstrates excellent generalization. When applied to diverse tasks and datasets, including real-world scans (KITTI), industrial components (ABC), and point cloud upsampling (PU-GAN), it yields significant quantitative gains and produces visually more uniform and structurally complete point clouds.
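The decoupling the abstract describes can be made concrete: standard CD is the sum of two directional nearest-neighbor terms, and an asymmetric variant simply weights them differently. A minimal NumPy sketch (the weight values and function names are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def pairwise_sq_dists(P, Q):
    """Squared Euclidean distances between point sets of shape (N, 3) and (M, 3)."""
    return ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)

def flexible_weighted_cd(pred, gt, w_precision=0.5, w_completeness=1.5):
    """Sketch of an asymmetrically weighted Chamfer Distance.
    - precision term: each predicted point to its nearest ground-truth
      point (penalizes local detail deviation);
    - completeness term: each ground-truth point to its nearest predicted
      point (penalizes missing global coverage).
    Setting both weights to 1 recovers the standard symmetric CD; the
    asymmetric defaults here are illustrative only."""
    d = pairwise_sq_dists(pred, gt)
    precision = d.min(axis=1).mean()     # pred -> gt direction
    completeness = d.min(axis=0).mean()  # gt -> pred direction
    return w_precision * precision + w_completeness * completeness

# Identical clouds score zero under any weighting.
pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(flexible_weighted_cd(pred, pred))  # 0.0
```

Upweighting the completeness term is one plausible way to express the paper's stated priority on global structural integrity over local precision.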

Citations: 0
On the Equilibrium Between Feasible Zone and Uncertain Model in Safe Exploration.
IF 18.6 Pub Date : 2026-03-03 DOI: 10.1109/TPAMI.2026.3669907
Yujie Yang, Zhilong Zheng, Shengbo Eben Li

Ensuring the safety of environmental exploration is a critical problem in reinforcement learning (RL). While limiting exploration to a feasible zone has become widely accepted as a way to ensure safety, key questions remain unresolved: what is the maximum feasible zone achievable through exploration, and how can it be identified? This paper, for the first time, answers these questions by revealing that the goal of safe exploration is to find the equilibrium between the feasible zone and the environment model. This conclusion is based on the understanding that these two components are interdependent: a larger feasible zone leads to a more accurate environment model, and a more accurate model, in turn, enables exploring a larger zone. We propose the first equilibrium-oriented safe exploration framework called safe equilibrium exploration (SEE), which alternates between finding the maximum feasible zone and the least uncertain model. Using a graph formulation of the uncertain model, we prove that the uncertain model obtained by SEE is monotonically refined, the feasible zones monotonically expand, and both converge to the equilibrium of safe exploration. Experiments on classic control tasks show that our algorithm successfully expands the feasible zones with zero constraint violation, and achieves the equilibrium of safe exploration within a few iterations.
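The alternation the abstract describes — expand the feasible zone under the current model, then refine the model by exploring that zone, until neither changes — can be illustrated on a toy 1-D corridor. This is a schematic of the general idea, not the authors' SEE algorithm; `TRUE_SAFE`, the grid world, and both helpers are invented for illustration:

```python
TRUE_SAFE = {0, 1, 2, 3, 4, 5, 6}  # ground truth, unknown to the agent
ALL_STATES = set(range(10))

def max_feasible_zone(known_safe, start=0):
    """Largest set of states reachable from `start` through states the
    current (uncertain) model has certified safe, using 1-D adjacency."""
    zone, frontier = set(), [start]
    while frontier:
        s = frontier.pop()
        if s in known_safe and s not in zone:
            zone.add(s)
            frontier += [s - 1, s + 1]
    return zone

def refine_model(known_safe, known_unsafe, zone):
    """From inside the feasible zone, observe the true safety of adjacent
    states, shrinking the model's uncertainty at the boundary."""
    for s in zone:
        for n in (s - 1, s + 1):
            if n in ALL_STATES:
                (known_safe if n in TRUE_SAFE else known_unsafe).add(n)
    return known_safe, known_unsafe

# Alternate until the feasible zone stops expanding: the equilibrium,
# where zone and model are mutually consistent.
known_safe, known_unsafe = {0}, set()
zone = set()
while True:
    new_zone = max_feasible_zone(known_safe)
    known_safe, known_unsafe = refine_model(known_safe, known_unsafe, new_zone)
    if new_zone == zone:
        break
    zone = new_zone
print(sorted(zone))  # [0, 1, 2, 3, 4, 5, 6]
```

The toy exhibits the two monotonicity properties the paper proves in general: the zone only grows, the model's uncertainty only shrinks, and the loop halts at the maximum feasible zone without ever entering an unsafe state.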

Citations: 0