
Latest publications in IEEE Transactions on Pattern Analysis and Machine Intelligence

Disentangling Consistent and Specific Information for Double Incomplete Multi-View Multi-Label Classification.
IF 18.6 Pub Date: 2026-02-23 DOI: 10.1109/TPAMI.2026.3665097
Jie Wen, Lian Zhao, Xiaohuan Lu, Chengliang Liu, Li Shen, Chao Huang, Yong Xu

As a prominent research topic, multi-view multi-label classification (MvMlC) aims to assign multiple labels to samples by integrating information from various perspectives. However, in real-world scenarios, MvMlC frequently faces the challenge of learning from data with missing views and labels, typically caused by sensor malfunctions or by the costly and time-consuming process of manual annotation. In addition, learning robust representations that are both consistent across views and specific to individual views remains a challenge. To address these issues, we propose a novel double incomplete multi-view multi-label classification framework based on Disentangling Consistent and Specific Information (DCSI). Specifically, we employ a dual-channel encoder with identical architecture but distinct objectives to extract cross-view consistent information and view-specific unique information from all views, respectively. Meanwhile, a view discriminator is constructed to decouple these two types of information, facilitating the extraction of pure consistent and specific information. Moreover, we carefully design fusion strategies tailored to each representation type. For consistent representations, we propose a dynamic-confidence-aware fusion mechanism that assesses the reliability of each view's representations with respect to the classification task, enabling the model to prioritize information from trustworthy representations. For specific representations, in light of their complementary rather than redundant nature, we treat the representations from each view equally to ensure fairness. Experiments on five datasets demonstrate that our method outperforms existing state-of-the-art methods.
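The two fusion rules described above can be sketched in a few lines of plain Python. This is a minimal, hypothetical sketch: lists stand in for learned representations, and in DCSI the confidence scores would come from a learned, task-aware module rather than being supplied by hand.

```python
import math

def confidence_weighted_fusion(view_reprs, confidences):
    """Dynamic-confidence-aware fusion: softmax-normalize per-view reliability
    scores and take the weighted average of the view representations."""
    exps = [math.exp(c) for c in confidences]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(view_reprs[0])
    return [sum(w * v[i] for w, v in zip(weights, view_reprs)) for i in range(dim)]

def uniform_fusion(view_reprs):
    """Equal-weight fusion for the complementary view-specific representations."""
    n = len(view_reprs)
    dim = len(view_reprs[0])
    return [sum(v[i] for v in view_reprs) / n for i in range(dim)]
```

With confidences (2.0, 0.0), the first view dominates the fused consistent representation, while `uniform_fusion` treats both views alike.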

Citations: 0
Beyond Heat Dissipation: Optimizing Diffusion Models in Frequency Domain.
IF 18.6 Pub Date: 2026-02-23 DOI: 10.1109/TPAMI.2026.3666860
Qisen Wang, Yifan Zhao, Jia Li

The majority of standard diffusion models employ pixel-wise degradations while neglecting the multi-scale characteristics of images. Recently, generalized diffusion models with Positive Semi-definite Degradations (PSD), such as heat dissipation and blurring, have been proposed to address this, but they suffer from low generation quality: their optimization analysis is incomplete, and their hand-crafted, fixed inductive biases cannot adapt to the training process or to different data distributions. In this paper, we present a comprehensive theoretical analysis of the frequency-domain optimization process for PSD-based generalized diffusion models, showing that the non-isotropic frequency-domain degradation of the forward process implicitly acts as non-isotropic weighting of the Variational Lower Bound in the reverse optimization process. Based on this insight, we propose the Frequency Inductive Biases Bootstrapping Optimization (FIBBO) method, which parameterizes the forward process and iteratively learns distinct frequency degradation-generation trajectories. To overcome PSD's hand-crafted and fixed inductive biases, FIBBO dynamically modifies the non-isotropic Gaussian kernel of the forward degradation process, so that the introduced inductive biases can be adjusted adaptively during training. Experiments on public datasets show that FIBBO significantly improves the generation quality of PSD-based generalized diffusion models. The code will be publicly available.
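The kind of frequency-domain degradation being generalized here can be sketched in one dimension with a naive DFT. The decay profile `sigma` below is a hand-picked, heat-dissipation-like choice (proportional to squared frequency); FIBBO's point is precisely that such a profile should be learned and adapted rather than fixed.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform of a real-valued list."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def dissipate(x, tau, sigma):
    """Non-isotropic forward degradation: frequency k is damped by
    exp(-sigma[k] * tau), so high frequencies (large sigma[k]) vanish first,
    as in heat dissipation."""
    return idft([Xk * math.exp(-s * tau) for Xk, s in zip(dft(x), sigma)])
```

Running a pure high-frequency signal through `dissipate` with a squared-frequency profile wipes it out almost entirely, while `tau = 0` leaves the signal untouched.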

Citations: 0
Towards the Spectral bias Alleviation by Normalizations in Coordinate Networks.
IF 18.6 Pub Date: 2026-02-23 DOI: 10.1109/TPAMI.2026.3667002
Zhicheng Cai, Hao Zhu, Qiu Shen, Xinran Wang, Xun Cao

Representing signals with coordinate networks has recently come to dominate the area of inverse problems and is widely applied in various scientific computing tasks. However, coordinate networks suffer from spectral bias, limiting their capacity to learn high-frequency components. This problem stems from the pathological distribution of the eigenvalues of the coordinate network's neural tangent kernel (NTK). We find that this pathological distribution can be improved using classical normalization techniques (batch normalization and layer normalization), which are common in convolutional neural networks but rarely used in coordinate networks. We prove that normalization greatly reduces the maximum and the variance of the NTK's eigenvalues while only slightly changing their mean; since the maximum eigenvalue is much larger than most of the others, this reduction in variance shifts the eigenvalue distribution from lower values toward higher ones, thereby alleviating the spectral bias (see Fig. 1). Furthermore, we propose two new normalization techniques that combine these two techniques in different ways. The efficacy of these normalization techniques is substantiated by the significant improvements, and new state-of-the-art results, achieved by applying normalization-based coordinate networks to various tasks, including image compression, computed tomography reconstruction, shape representation, magnetic resonance imaging, novel view synthesis, and multi-view stereo reconstruction.
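Layer normalization, one of the two classical techniques the paper applies to coordinate networks, is easy to sketch directly. The toy tanh coordinate MLP below is an illustrative assumption, not the paper's architecture; the point is only where the normalization sits in the forward pass.

```python
import math

def layer_norm(h, eps=1e-5):
    """Normalize a hidden vector to zero mean and (near-)unit variance."""
    mu = sum(h) / len(h)
    var = sum((v - mu) ** 2 for v in h) / len(h)
    return [(v - mu) / math.sqrt(var + eps) for v in h]

def coordinate_mlp(x, weights, normalize=False):
    """Tiny coordinate network: a scalar coordinate passes through tanh layers,
    optionally with layer normalization applied after every layer."""
    h = [x]
    for W in weights:
        h = [math.tanh(sum(w * v for w, v in zip(row, h))) for row in W]
        if normalize:
            h = layer_norm(h)
    return h
```

With `normalize=True`, every layer's activations are re-centered and re-scaled, which is the mechanism the paper analyzes through the NTK eigenvalue distribution.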

Citations: 0
DrivingGaussian++: Towards Realistic Reconstruction and Editable Simulation for Surrounding Dynamic Driving Scenes.
IF 18.6 Pub Date: 2026-02-23 DOI: 10.1109/TPAMI.2026.3667072
Yajiao Xiong, Xiaoyu Zhou, Yongtao Wang, Deqing Sun, Ming-Hsuan Yang

We present DrivingGaussian++, an efficient and effective framework for realistic reconstruction and controllable editing of surrounding dynamic autonomous driving scenes. DrivingGaussian++ models the static background with incremental 3D Gaussians and reconstructs moving objects with a composite dynamic Gaussian graph, ensuring accurate positions and occlusions. By integrating a LiDAR prior, it achieves detailed and consistent scene reconstruction, outperforming existing methods in dynamic scene reconstruction and photorealistic surround-view synthesis. DrivingGaussian++ supports training-free controllable editing for dynamic driving scenes, including texture modification, weather simulation, and object manipulation, leveraging multi-view images and depth priors. By integrating large language models (LLMs) and controllable editing, our method can automatically generate dynamic object motion trajectories and enhance their realism during the optimization process. DrivingGaussian++ demonstrates consistent and realistic editing results and generates dynamic multi-view driving scenarios, while significantly enhancing scene diversity.
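The composition of a static background with moving objects placed by time-dependent poses can be shown schematically. The toy 2-D sketch below uses point sets and translations where the actual method uses 3D Gaussians, full poses, and occlusion handling.

```python
def compose_scene(static_points, dynamic_objects, t):
    """Compose one frame: static background points plus each moving object's
    local points, shifted into the world frame by its trajectory at time t.

    dynamic_objects: list of (local_points, trajectory) pairs, where
    trajectory(t) returns a 2-D translation (tx, ty)."""
    frame = list(static_points)
    for local_points, trajectory in dynamic_objects:
        tx, ty = trajectory(t)
        frame.extend((x + tx, y + ty) for x, y in local_points)
    return frame
```

A car-like object moving right at unit speed lands at a different world position in every composed frame, while the background stays put; editing the scene amounts to swapping an object's point set or trajectory.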

Citations: 0
SSD: Making Face Forgery Clues Evident Again With Self-Steganographic Detection.
IF 18.6 Pub Date: 2026-02-23 DOI: 10.1109/TPAMI.2026.3667180
Ruiyang Xia, Dawei Zhou, Lin Yuan, Jie Li, Nannan Wang, Xinbo Gao

The rapid development of generative AI techniques enables the synthesis of highly realistic facial images, posing significant challenges for the accurate detection of face forgeries. In contrast to solely elevating detector awareness, proactively reducing the intrinsic difficulty of forgery detection can streamline detector complexity while improving both generalization and robustness. This insight motivates our defense strategy to make face forgery clues more evident. Specifically, a novel proactive approach dubbed Self-Steganographic Detection (SSD) is proposed to imperceptibly embed facial images into themselves as a form of detection evidence. The recovery process is designed to remain robust under normal manipulations while exhibiting deliberate degradation under malicious manipulations, thereby clearly revealing potential forgeries. Unlike embedding bit-level vectors, pixel-level images are informative to ensure the generalization of our approach. Due to the similarity between the protected and embedded images, SSD performs detection without storing any embedded information in advance. To support practical deployment, our approach incorporates a dual detection scheme that aims to identify unprotected images and determine the authenticity of protected images. Extensive experiments using 8 face forgery techniques demonstrate the effectiveness of our approach compared to state-of-the-art methods.
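The self-embedding idea can be illustrated with a deliberately fragile least-significant-bit scheme: each pixel's most significant bit (a coarse copy of the image's own content, an illustrative stand-in for the paper's learned embedding) is hidden in the least significant bits, so a substantial edit breaks the agreement between the two.

```python
def evidence_bits(pixels):
    """Coarse self-description of an 8-bit image: each pixel's top bit."""
    return [(p >> 7) & 1 for p in pixels]

def protect(pixels):
    """Self-steganographic embedding: write the evidence into each pixel's LSB."""
    return [(p & ~1) | b for p, b in zip(pixels, evidence_bits(pixels))]

def is_forged(pixels):
    """Flag forgery when the recovered LSB evidence no longer matches the
    evidence re-derived from the image content itself."""
    return [p & 1 for p in pixels] != evidence_bits(pixels)
```

Note this toy version is fragile to any edit at all; the paper's scheme is designed to survive normal manipulations and degrade only under malicious ones.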

Citations: 0
Velocity Disambiguation for Video Frame Interpolation.
IF 18.6 Pub Date: 2026-02-23 DOI: 10.1109/TPAMI.2026.3667437
Zhihang Zhong, Yiming Zhang, Wei Wang, Xiao Sun, Yu Qiao, Gurunandan Krishnan, Sizhuo Ma, Jian Wang

Existing video frame interpolation (VFI) methods blindly predict where each object is at a specific timestep $t$ ("time indexing"), which struggles to predict precise object movements. Given two images of a baseball, there are infinitely many possible trajectories: accelerating or decelerating, straight or curved. This often results in blurry frames as the method averages out these possibilities. Instead of forcing the network to learn this complicated time-to-location mapping implicitly together with predicting the frames, we provide the network with an explicit hint on how far the object has traveled between start and end frames, a novel approach termed "distance indexing". This method offers a clearer learning goal for models, reducing the uncertainty tied to object speeds. We further observed that, even with this extra guidance, objects can still be blurry especially when they are equally far from both input frames (i.e., halfway in-between), due to the directional ambiguity in long-range motion. To solve this, we propose an iterative reference-based estimation strategy that breaks down a long-range prediction into several short-range steps. When integrating our plug-and-play strategies into state-of-the-art learning-based models, they exhibit markedly sharper outputs and superior perceptual quality in arbitrary time interpolations, using a uniform distance indexing map in the same format as time indexing without requiring extra computation. Furthermore, we demonstrate that if additional latency is acceptable, a continuous map estimator can be employed to compute a pixel-wise dense distance indexing using multiple nearby frames. Combined with efficient multi-frame refinement, this extension can further disambiguate complex motion, thus enhancing performance both qualitatively and quantitatively. 
Additionally, the ability to manually specify distance indexing allows for independent temporal manipulation of each object, providing a novel tool for video editing tasks such as re-timing. The code is available at https://zzh-tech.github.io/InterpAny-Clearer/.
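The gap between time indexing and distance indexing is visible with one-line interpolators. The quadratic motion profile below is an illustrative assumption standing in for any accelerating object.

```python
def time_indexing_position(p0, p1, t):
    """Time indexing implicitly assumes constant velocity between the frames."""
    return p0 + (p1 - p0) * t

def distance_indexing_position(p0, p1, d):
    """Distance indexing: d states directly what fraction of the path is covered."""
    return p0 + (p1 - p0) * d

def fraction_covered(t):
    """Ground truth for an object accelerating from rest: fraction = t**2."""
    return t ** 2
```

At the midpoint t = 0.5, the accelerating object has covered only a quarter of the path: distance indexing with d = 0.25 lands exactly on it, while time indexing misses by a quarter of the total distance, and averaging over all plausible speeds is what blurs the interpolated frame.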

Citations: 0
Graph Neural Networks Powered by Encoder Embedding for Improved Node Learning.
IF 18.6 Pub Date: 2026-02-23 DOI: 10.1109/TPAMI.2026.3667397
Shiyu Chen, Cencheng Shen, Youngser Park, Carey E Priebe

Graph neural networks (GNNs) have emerged as a powerful framework for a wide range of node-level graph learning tasks. However, their performance typically depends on random or minimally informed initial feature representations, where poor initialization can lead to slower convergence and increased training instability. In this paper, we address this limitation by leveraging a statistically grounded one-hot graph encoder embedding (GEE) as a high-quality, structure-aware initialization for node features. Integrating GEE into standard GNNs yields the GEE-powered GNN (GG) framework. Across extensive simulations and real-world benchmarks, GG provides consistent and substantial performance gains in both unsupervised and supervised settings. For node classification, we further introduce GG-C, which concatenates the outputs of GG and GEE and outperforms competing methods, achieving roughly 10-50% accuracy improvements across most datasets. These results demonstrate the importance of principled, structure-aware initialization for improving the efficiency, stability, and overall performance of graph neural network architecture, enabling models to better exploit graph topology from the outset.
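The one-hot graph encoder embedding itself is simple enough to sketch: each node is embedded as its edge counts into every label class, normalized by class size. This is a minimal sketch of that idea; the paper's normalization and handling of unlabeled nodes may differ.

```python
def graph_encoder_embedding(edges, labels, n_classes):
    """One-hot GEE sketch: node u's embedding dimension c counts u's neighbors
    of class c, scaled by 1 / (size of class c)."""
    n = len(labels)
    class_size = [labels.count(c) for c in range(n_classes)]
    Z = [[0.0] * n_classes for _ in range(n)]
    for u, v in edges:  # undirected edges contribute to both endpoints
        Z[u][labels[v]] += 1.0 / class_size[labels[v]]
        Z[v][labels[u]] += 1.0 / class_size[labels[u]]
    return Z
```

The resulting rows are structure-aware features that can initialize (or, as in GG-C, be concatenated with) a GNN's input in place of random vectors.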

Citations: 0
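The one-hot graph encoder embedding that the abstract above uses as a GNN initializer has a very compact form. Below is a minimal illustrative sketch, not the authors' code: the function name `graph_encoder_embedding` and the per-class averaging normalization are my assumptions about one common GEE variant.

```python
import numpy as np

def graph_encoder_embedding(A, labels, n_classes):
    """One-hot graph encoder embedding (illustrative sketch).

    Column k of W is the indicator vector of class k scaled by
    1/|class k|, so Z = A @ W gives each node the average edge
    weight it sends into every class: a cheap, structure-aware
    feature that can initialize GNN node representations.
    """
    n = A.shape[0]
    W = np.zeros((n, n_classes))
    for k in range(n_classes):
        members = np.flatnonzero(labels == k)
        if members.size:
            W[members, k] = 1.0 / members.size
    return A @ W

# Tiny 4-node graph: nodes 0,1 in class 0; nodes 2,3 in class 1.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
Z = graph_encoder_embedding(A, np.array([0, 0, 1, 1]), 2)
```

The embedding is one matrix product per graph, which is why it is cheap enough to serve as an initialization step before GNN training.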
Evolving Markov Chains: Online Mode Discovery and Recognition from Data Streams.
IF 18.6 Pub Date : 2026-02-23 DOI: 10.1109/TPAMI.2026.3660046
Kutalmış Coşkun, Borahan Tümer, Bjarne C Hiller, Martin Becker

Markov chains are simple yet powerful mathematical structures for modeling temporally dependent processes. They generally assume stationary data, i.e., fixed transition probabilities between observations/states. However, live, real-world processes, like in the context of activity tracking, biological time series, or industrial monitoring, often switch behavior over time. Such behavior switches can be modeled as transitions between higher-level modes (e.g., running, walking, etc.). Yet the modes are usually not all known in advance, often exhibit vastly differing transition probabilities, and can switch unpredictably. Thus, to track behavior changes of live, real-world processes, this study proposes an online and efficient method to construct Evolving Markov chains (EMCs). EMCs adaptively track transition probabilities, automatically discover modes, and detect mode switches in an online manner. In contrast to previous work, EMCs are of arbitrary order, and the proposed update scheme does not rely on tracking windows, only updates the relevant region of the probability tensor, and enjoys geometric convergence of the expected estimates. Our evaluation of synthetic data and real-world applications on human activity recognition, electric motor condition monitoring, and eye-state recognition from electroencephalography (EEG) measurements illustrates the versatility of the approach and points to the potential of EMCs to efficiently track, model, and understand live, real-world processes.

Citations: 0
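The windowless, single-row update scheme described above can be illustrated with a first-order toy. This is my own sketch, not the paper's EMC algorithm: the class name `OnlineMarkovEstimator` and the forgetting rate `alpha` are assumptions. Each observation rescales only the previous state's row of the transition matrix, and the estimate approaches a stationary target geometrically at rate 1 - alpha.

```python
import numpy as np

class OnlineMarkovEstimator:
    """Online transition-probability tracking with exponential forgetting.

    Per observation, only the row of the previous state is touched
    (the "relevant region"), each row stays a probability vector,
    and under stationary data the estimate converges geometrically.
    """

    def __init__(self, n_states, alpha=0.05):
        self.P = np.full((n_states, n_states), 1.0 / n_states)
        self.alpha = alpha
        self.prev = None

    def update(self, state):
        if self.prev is not None:
            row = self.P[self.prev]          # view: in-place edits hit P
            row *= (1.0 - self.alpha)        # forget old evidence
            row[state] += self.alpha         # row still sums to 1
        self.prev = state

# Feed a strictly alternating stream; the estimate should approach
# the deterministic transition matrix [[0, 1], [1, 0]].
est = OnlineMarkovEstimator(2, alpha=0.05)
for s in [0, 1] * 300:
    est.update(s)
```

If the stream switched to a different regime, the same update would re-converge toward the new transition probabilities at the same geometric rate, which is the intuition behind tracking mode switches online.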
Local Causal Discovery with Background Knowledge.
IF 18.6 Pub Date : 2026-02-23 DOI: 10.1109/TPAMI.2026.3667409
Qingyuan Zheng, Yue Liu, Yangbo He

Causality plays a pivotal role in various fields of study. Based on the framework of causal graphical models, previous works have proposed identifying whether a variable is a cause or non-cause of another variable in every Markov equivalent graph by learning only the local structure. However, the presence of prior knowledge, often represented as a partially known causal graph, is common in many causal modeling applications. Leveraging this prior knowledge enables further identification of causal relations. In this paper, we first propose a method for learning the local structure by incorporating several types of causal background knowledge, including direct causal, non-ancestral, and ancestral information. Then we introduce sufficient and necessary conditions for identifying causal relations based solely on the local structure in the presence of prior knowledge. The effectiveness and efficiency of our method are demonstrated through experiments on local structure learning, causal relation identification, and its application to fair machine learning.

Citations: 0
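To make concrete how background knowledge can propagate edge orientations in an equivalence class, here is Meek's rule 1, a standard building block in this literature, not the paper's full method (the function name and edge encoding are mine): once prior knowledge orients a → b, any undirected edge b − c with a and c non-adjacent must be oriented b → c, since orienting it the other way would create a new v-structure a → b ← c.

```python
def apply_meek_rule1(directed, undirected):
    """Propagate Meek's rule 1 to a fixpoint (illustrative sketch).

    directed:   set of ordered pairs (a, b), meaning a -> b
    undirected: set of pairs, meaning an undirected edge a - b
    """
    directed = set(directed)
    undirected = {frozenset(e) for e in undirected}

    def adjacent(x, y):
        return ((x, y) in directed or (y, x) in directed
                or frozenset((x, y)) in undirected)

    changed = True
    while changed:
        changed = False
        for a, b in list(directed):
            for e in list(undirected):
                if b in e:
                    c = next(iter(e - {b}))
                    # a -> b, b - c, a and c non-adjacent: orient b -> c.
                    if c != a and not adjacent(a, c):
                        undirected.discard(e)
                        directed.add((b, c))
                        changed = True
    return directed, undirected

# Chain a - b - c with background knowledge a -> b:
# rule 1 forces b -> c, leaving no undirected edges.
d, u = apply_meek_rule1({("a", "b")}, {("b", "c")})
```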
On the Adversarial Transferability of Generalized "Skip Connections".
IF 18.6 Pub Date : 2026-02-18 DOI: 10.1109/TPAMI.2026.3666165
Yisen Wang, Yichuan Mo, Dongxian Wu, Mingjie Li, Xingjun Ma, Zhouchen Lin

Skip connections are an essential ingredient that lets modern deep models grow deeper and more powerful. Despite their huge success in normal scenarios (state-of-the-art classification performance on natural examples), we investigate and identify an interesting property of skip connections under adversarial scenarios: the use of skip connections allows easier generation of highly transferable adversarial examples. Specifically, in ResNet-like models (with skip connections), we find that biasing backpropagation to favor gradients from skip connections, while suppressing those from residual modules via a decay factor, allows one to craft adversarial examples with high transferability. Based on this insight, we propose the Skip Gradient Method (SGM). Although starting from ResNet-like models in vision domains, we further extend SGM to more advanced architectures, including Vision Transformers (ViTs), models with varying-length paths, and other domains such as natural language processing. We conduct comprehensive transfer-based attacks against diverse model families, including ResNets, Transformers, Inceptions, Neural Architecture Search-based models, and Large Language Models (LLMs). The results demonstrate that employing SGM can greatly improve the transferability of crafted attacks in almost all cases. Furthermore, we demonstrate that SGM remains effective under more challenging settings such as ensemble-based attacks, targeted attacks, and attacks against defense-equipped models. Finally, we provide theoretical explanations and empirical insights on how SGM works. Our findings not only motivate new adversarial research into the architectural characteristics of models but also open up further challenges for secure model architecture design.

Citations: 0
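The decay-factor idea is easy to see on a single linear residual block. This is a toy of my own construction, not the paper's implementation: for y = x + F(x) with F(x) = Wx, ordinary backprop gives dL/dx = (I + Wᵀ)·dL/dy, and an SGM-style decay γ scales only the residual-branch term.

```python
import numpy as np

def residual_backward(W, grad_out, gamma=1.0):
    """Input gradient of a linear residual block y = x + W @ x.

    gamma = 1 recovers standard backprop; gamma < 1 decays the
    residual-module gradient, biasing dL/dx toward the skip path,
    which is the SGM-style trick for more transferable
    adversarial perturbations.
    """
    return grad_out + gamma * (W.T @ grad_out)

W = np.array([[2.0]])
g = np.array([1.0])
standard = residual_backward(W, g)            # (I + W^T) g = [3.0]
decayed = residual_backward(W, g, gamma=0.5)  # (I + 0.5 W^T) g = [2.0]
```

Stacking such blocks multiplies these factors, so the decay compounds with depth and the crafted gradient increasingly follows the skip paths from input to output.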
Journal: IEEE transactions on pattern analysis and machine intelligence