
Latest publications in IEEE Transactions on Pattern Analysis and Machine Intelligence

Learning Signed Hyper Surfaces for Oriented Point Cloud Normal Estimation.
Pub Date : 2024-07-19 DOI: 10.1109/TPAMI.2024.3431221
Qing Li, Huifang Feng, Kanle Shi, Yue Gao, Yi Fang, Yu-Shen Liu, Zhizhong Han

We propose a novel method called SHS-Net for point cloud normal estimation by learning signed hyper surfaces, which can accurately predict normals with globally consistent orientation from various point clouds. Almost all existing methods estimate oriented normals through a two-stage pipeline, i.e., unoriented normal estimation followed by normal orientation, with each step implemented by a separate algorithm. However, previous methods are sensitive to parameter settings, performing poorly on point clouds with noise, density variations and complex geometries. In this work, we introduce signed hyper surfaces (SHS), parameterized by multi-layer perceptron (MLP) layers, to learn to estimate oriented normals from point clouds in an end-to-end manner. The signed hyper surfaces are implicitly learned in a high-dimensional feature space in which local and global information are aggregated. Specifically, we introduce a patch encoding module and a shape encoding module to encode a 3D point cloud into a local latent code and a global latent code, respectively. Then, an attention-weighted normal prediction module is proposed as a decoder, which takes the local and global latent codes as input to predict oriented normals. Experimental results show that our algorithm outperforms the state-of-the-art methods in both unoriented and oriented normal estimation.
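The attention-weighted decoder stage described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the shapes, parameter names, and the simple two-way softmax over the two latent codes are all assumptions.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    # Two-layer perceptron with ReLU, standing in for the paper's MLP layers.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def predict_oriented_normal(local_code, global_code, params):
    # Toy attention-weighted decoder: softmax weights over the local and global
    # latent codes, then an MLP head regressing a unit-length oriented normal.
    scores = np.array([
        local_code @ params["q"],   # attention score for the local code
        global_code @ params["q"],  # attention score for the global code
    ])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    fused = weights[0] * local_code + weights[1] * global_code
    raw = mlp(fused, params["w1"], params["b1"], params["w2"], params["b2"])
    return raw / np.linalg.norm(raw)  # normals are unit vectors

rng = np.random.default_rng(0)
d = 8  # hypothetical latent dimension
params = {
    "q": rng.normal(size=d),
    "w1": rng.normal(size=(d, 16)), "b1": np.zeros(16),
    "w2": rng.normal(size=(16, 3)), "b2": np.zeros(3),
}
normal = predict_oriented_normal(rng.normal(size=d), rng.normal(size=d), params)
```

In the actual method the latent codes would come from the patch and shape encoders, and all parameters would be trained end-to-end.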

Citations: 0
Gradient Inversion Attacks: Impact Factors Analyses and Privacy Enhancement.
Pub Date : 2024-07-18 DOI: 10.1109/TPAMI.2024.3430533
Zipeng Ye, Wenjian Luo, Qi Zhou, Zhenqian Zhu, Yuhui Shi, Yan Jia

Gradient inversion attacks (GIAs) have posed significant challenges to the emerging paradigm of distributed learning; they aim to reconstruct the private training data of clients (participating parties in distributed training) from the shared parameters. To counteract GIAs, a large number of privacy-preserving methods for distributed learning scenarios have emerged. However, these methods have significant limitations, either compromising the usability of the global model or consuming substantial additional computational resources. Furthermore, despite the extensive efforts dedicated to defense methods, the underlying causes of data leakage in distributed learning still have not been thoroughly investigated. Therefore, this paper tries to reveal the potential reasons behind the successful implementation of existing GIAs, explore variations in the robustness of models against GIAs during the training process, and investigate the impact of different model structures on attack performance. After these explorations and analyses, this paper proposes a plug-and-play GIAs defense method, which augments the training data with a designed vicinal distribution. Sufficient empirical experiments demonstrate that this easy-to-implement method can ensure a basic level of privacy without compromising the usability of the global model.
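The core idea of training on a vicinal distribution rather than the raw data points can be illustrated with a mixup-style sketch. The paper designs its own vicinal distribution; the Beta-weighted mixing below is only a well-known stand-in for the general idea, with all names and parameters being assumptions.

```python
import numpy as np

def vicinal_augment(x_batch, y_batch, alpha=0.4, rng=None):
    # Draw each training pair from a vicinal (neighbourhood) distribution by
    # Beta-weighted mixing with another random pair, so gradients are computed
    # on neighbours of the data rather than the raw private points.
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=len(x_batch))[:, None]
    perm = rng.permutation(len(x_batch))
    return (lam * x_batch + (1 - lam) * x_batch[perm],
            lam * y_batch + (1 - lam) * y_batch[perm])

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))   # toy inputs standing in for training images
y = np.eye(4)                 # one-hot labels
x_aug, y_aug = vicinal_augment(x, y, rng=rng)
```

Because the shared gradients are then computed on mixed samples, an inversion attack at best recovers points from the vicinal distribution, not the original data.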

Citations: 0
Physical Adversarial Attack Meets Computer Vision: A Decade Survey.
Pub Date : 2024-07-18 DOI: 10.1109/TPAMI.2024.3430860
Hui Wei, Hao Tang, Xuemei Jia, Zhixiang Wang, Hanxun Yu, Zhubo Li, Shinichi Satoh, Luc Van Gool, Zheng Wang

Despite the impressive achievements of Deep Neural Networks (DNNs) in computer vision, their vulnerability to adversarial attacks remains a critical concern. Extensive research has demonstrated that incorporating sophisticated perturbations into input images can lead to a catastrophic degradation in DNNs' performance. This perplexing phenomenon not only exists in the digital space but also in the physical world. Consequently, it becomes imperative to evaluate the security of DNNs-based systems to ensure their safe deployment in real-world scenarios, particularly in security-sensitive applications. To facilitate a profound understanding of this topic, this paper presents a comprehensive overview of physical adversarial attacks. Firstly, we distill four general steps for launching physical adversarial attacks. Building upon this foundation, we uncover the pervasive role of artifacts carrying adversarial perturbations in the physical world. These artifacts influence each step. To denote them, we introduce a new term: adversarial medium. Then, we take the first step to systematically evaluate the performance of physical adversarial attacks, taking the adversarial medium as a first attempt. Our proposed evaluation metric, hiPAA, comprises six perspectives: Effectiveness, Stealthiness, Robustness, Practicability, Aesthetics, and Economics. We also provide comparative results across task categories, together with insightful observations and suggestions for future research directions.
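A multi-perspective metric like hiPAA ultimately aggregates per-perspective scores into one number. The six perspective names below come from the abstract; the equal-weight average is an assumption for illustration only (the survey defines the actual weighting).

```python
def hipaa_score(scores, weights=None):
    # Aggregate the six hiPAA perspectives into a single score.
    # Equal weights are an illustrative assumption, not the survey's choice.
    perspectives = ["effectiveness", "stealthiness", "robustness",
                    "practicability", "aesthetics", "economics"]
    if weights is None:
        weights = {p: 1.0 / len(perspectives) for p in perspectives}
    return sum(weights[p] * scores[p] for p in perspectives)

# Hypothetical per-perspective scores for one attack, each in [0, 1].
attack = {"effectiveness": 0.9, "stealthiness": 0.6, "robustness": 0.7,
          "practicability": 0.8, "aesthetics": 0.5, "economics": 0.9}
overall = hipaa_score(attack)
```

Passing a `weights` dict lets an evaluator emphasize, say, stealthiness over economics when comparing attacks within a task category.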

Citations: 0
STQD-Det: Spatio-Temporal Quantum Diffusion Model for Real-time Coronary Stenosis Detection in X-ray Angiography.
Pub Date : 2024-07-18 DOI: 10.1109/TPAMI.2024.3430839
Xinyu Li, Danni Ai, Hong Song, Jingfan Fan, Tianyu Fu, Deqiang Xiao, Yining Wang, Jian Yang

Detecting coronary stenosis accurately in X-ray angiography (XRA) is important for diagnosing and treating coronary artery disease (CAD). However, challenges arise from factors such as breathing and heart motion, poor imaging quality, and complex vascular structures, making it difficult to identify stenosis quickly and precisely. In this study, we propose a Quantum Diffusion model with Spatio-Temporal feature sharing for real-time stenosis Detection (STQD-Det). Our framework consists of two modules: a Sequential Quantum Noise Boxes module and a spatio-temporal feature module. To evaluate the effectiveness of the method, we conducted a 4-fold cross-validation on a dataset consisting of 233 XRA sequences. Our approach achieved an F1 score of 92.39% with a real-time processing speed of 25.08 frames per second, outperforming 17 state-of-the-art methods. The experimental results show that the proposed method can accomplish stenosis detection quickly and accurately.
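The evaluation protocol's bookkeeping (4-fold splitting of the 233 sequences and a detection F1 score) can be sketched as follows; this shows only the protocol, not the detector itself, and the seed and counts in the demo are arbitrary.

```python
import numpy as np

def f1_score(tp, fp, fn):
    # Detection F1 from true-positive, false-positive, false-negative counts.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def kfold_indices(n_items, k=4, seed=0):
    # Shuffle sequence indices once, then deal them into k disjoint folds.
    idx = np.random.default_rng(seed).permutation(n_items)
    return [idx[i::k] for i in range(k)]

folds = kfold_indices(233, k=4)  # one held-out fold per cross-validation run
```

Each fold serves once as the validation set while the remaining three train the model; the reported F1 is then averaged across folds.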

Citations: 0
Attention-Guided Low-Rank Tensor Completion.
Pub Date : 2024-07-17 DOI: 10.1109/TPAMI.2024.3429498
Truong Thanh Nhat Mai, Edmund Y Lam, Chul Lee

Low-rank tensor completion (LRTC) aims to recover missing data of high-dimensional structures from a limited set of observed entries. Despite recent significant successes, the original structures of data tensors are still not effectively preserved in LRTC algorithms, yielding less accurate restoration results. Moreover, LRTC algorithms often incur high computational costs, which hinders their applicability. In this work, we propose an attention-guided low-rank tensor completion (AGTC) algorithm, which can faithfully restore the original structures of data tensors using deep unfolding attention-guided tensor factorization. First, we formulate the LRTC task as a robust factorization problem based on low-rank and sparse error assumptions. Low-rank tensor recovery is guided by an attention mechanism to better preserve the structures of the original data. We also develop implicit regularizers to compensate for modeling inaccuracies. Then, we solve the optimization problem by employing an iterative technique. Finally, we design a multistage deep network by unfolding the iterative algorithm, where each stage corresponds to an iteration of the algorithm; at each stage, the optimization variables and regularizers are updated by closed-form solutions and learned deep networks, respectively. Experimental results for high dynamic range imaging and hyperspectral image restoration show that the proposed algorithm outperforms state-of-the-art algorithms.
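The kind of iteration AGTC unfolds into network stages can be illustrated with a classical low-rank-plus-sparse completion loop on a matrix: alternate a rank-r projection (the low-rank step) with a soft-thresholded error update (the sparse step). This sketch omits the paper's attention guidance and learned regularizers, and all hyperparameters below are arbitrary assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrink toward zero by tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lowrank_sparse_complete(y, mask, rank=1, tau=0.05, n_iter=300):
    # Alternate (1) a rank-r SVD projection, filling missing entries from the
    # current estimate, and (2) a soft-thresholded sparse-error update on the
    # observed entries -- a classical analogue of one unfolded AGTC stage.
    x = np.where(mask, y, 0.0)
    s = np.zeros_like(y)
    for _ in range(n_iter):
        z = np.where(mask, y - s, x)                 # impute missing entries
        u, sv, vt = np.linalg.svd(z, full_matrices=False)
        x = (u[:, :rank] * sv[:rank]) @ vt[:rank]    # low-rank projection
        s = np.where(mask, soft_threshold(y - x, tau), 0.0)  # sparse errors
    return x

rng = np.random.default_rng(3)
truth = np.outer(rng.normal(size=10), rng.normal(size=12))  # exactly rank 1
mask = rng.random(truth.shape) < 0.8                        # ~80% observed
recovered = lowrank_sparse_complete(np.where(mask, truth, 0.0), mask)
```

In the deep-unfolded version, each loop iteration becomes one network stage whose thresholds and projections are replaced by learned modules.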

Citations: 0
Surface Reconstruction from Point Clouds: A Survey and a Benchmark.
Pub Date : 2024-07-16 DOI: 10.1109/TPAMI.2024.3429209
ZhangJin Huang, Yuxin Wen, ZiHao Wang, Jinjuan Ren, Kui Jia

Reconstruction of a continuous surface of a two-dimensional manifold from its raw, discrete point cloud observation is a long-standing problem in computer vision and graphics research. The problem is technically ill-posed, and becomes more difficult considering that various sensing imperfections appear in point clouds obtained by practical depth scanning. A rich set of methods has been proposed in the literature, along with reviews of existing methods. However, existing reviews fall short of thorough investigation on a common benchmark. The present paper aims to review and benchmark existing methods in the new era of deep learning surface reconstruction. To this end, we contribute a large-scale benchmarking dataset consisting of both synthetic and real-scanned data; the benchmark includes object- and scene-level surfaces and takes into account various sensing imperfections that are commonly encountered in practical depth scanning. We conduct thorough empirical studies by comparing existing methods on the constructed benchmark, paying special attention to the robustness of existing methods against various scanning imperfections; we also study how different methods generalize in terms of reconstructing complex surface shapes. Our studies help identify the best conditions under which different methods work, and suggest some empirical findings. For example, while deep learning methods are increasingly popular in the research community, our systematic studies suggest that, surprisingly, a few classical methods perform even better in terms of both robustness and generalization; our studies also suggest that the practical challenges of misalignment of point sets from multi-view scanning, missing surface points, and point outliers remain unsolved by all existing surface reconstruction methods. We expect that the benchmark and our studies will be valuable both for practitioners and as guidance for new innovations in future research. We make the benchmark publicly accessible at https://Gorilla-Lab-SCUT.github.io/SurfaceReconstructionBenchmark.
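Benchmarks like this typically compare samples from a reconstructed surface against ground-truth points with geometric distances; the symmetric Chamfer distance below is a standard such metric (the paper's exact evaluation metrics may differ).

```python
import numpy as np

def chamfer_distance(p, q):
    # Symmetric Chamfer distance between point sets p (n,3) and q (m,3):
    # mean nearest-neighbour distance in both directions.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (n, m) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))                 # samples from a reconstruction
shifted = pts + np.array([0.1, 0.0, 0.0])      # a slightly misaligned copy
```

The brute-force (n, m) distance matrix is fine for toy sizes; real benchmarks use spatial indices (e.g., k-d trees) for the nearest-neighbour queries.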

Citations: 0
Human-Centric Transformer for Domain Adaptive Action Recognition.
Pub Date : 2024-07-16 DOI: 10.1109/TPAMI.2024.3429387
Kun-Yu Lin, Jiaming Zhou, Wei-Shi Zheng

We study the domain adaptation task for action recognition, namely domain adaptive action recognition, which aims to effectively transfer action recognition power from a label-sufficient source domain to a label-free target domain. Since actions are performed by humans, it is crucial to exploit human cues in videos when recognizing actions across domains. However, existing methods are prone to losing human cues and instead exploit the correlation between non-human contexts and associated actions for recognition; contexts of interest that are agnostic to actions reduce recognition performance in the target domain. To overcome this problem, we focus on uncovering human-centric action cues for domain adaptive action recognition, investigating two aspects of human-centric action cues, namely human cues and human-context interaction cues. Accordingly, our proposed Human-Centric Transformer (HCTransformer) develops a decoupled human-centric learning paradigm to explicitly concentrate on human-centric action cues in domain-variant video feature learning. Our HCTransformer first conducts human-aware temporal modeling by a human encoder, aiming to avoid a loss of human cues during domain-invariant video feature learning. Then, by a Transformer-like architecture, HCTransformer exploits domain-invariant and action-correlated contexts by a context encoder, and further models domain-invariant interaction between humans and action-correlated contexts. We conduct extensive experiments on three benchmarks, namely UCF-HMDB, Kinetics-NecDrone and EPIC-Kitchens-UDA, and the state-of-the-art performance demonstrates the effectiveness of our proposed HCTransformer.
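The human-context interaction modeling rests on a standard Transformer building block: cross-attention in which human tokens (queries) attend over context tokens (keys/values). The sketch below shows only this generic block; HCTransformer's actual layers, token definitions, and dimensions are more elaborate and are not reproduced here.

```python
import numpy as np

def cross_attention(queries, keys_values, scale=None):
    # Scaled dot-product cross-attention: each query row attends over all
    # key/value rows and returns a convex combination of them.
    d = queries.shape[-1]
    scale = scale if scale is not None else 1.0 / np.sqrt(d)
    scores = queries @ keys_values.T * scale
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ keys_values

rng = np.random.default_rng(0)
human_tokens = rng.normal(size=(4, 8))    # per-frame human features (queries)
context_tokens = rng.normal(size=(6, 8))  # scene-context features (keys/values)
fused = cross_attention(human_tokens, context_tokens)
```

In the paper's setting, learned query/key/value projections and domain-invariance constraints would sit around this core operation.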

Citations: 0
Class-Incremental Learning: A Survey.
Pub Date : 2024-07-16 DOI: 10.1109/TPAMI.2024.3429383
Da-Wei Zhou, Qi-Wei Wang, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu

Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results in many vision tasks in the closed world. However, novel classes emerge from time to time in our ever-changing world, requiring a learning system to acquire new knowledge continually. Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally and build a universal classifier among all seen classes. However, when the model is trained directly on new class instances, a fatal problem occurs: it tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades. There have been numerous efforts to tackle catastrophic forgetting in the machine learning community. In this paper, we comprehensively survey recent advances in class-incremental learning and summarize these methods from several aspects. We also provide a rigorous and unified evaluation of 17 methods on benchmark image classification tasks to characterize different algorithms empirically. Furthermore, we notice that the current comparison protocol ignores the influence of the memory budget in model storage, which may result in unfair comparisons and biased results. Hence, we advocate fair comparison by aligning the memory budget in evaluation, as well as several memory-agnostic performance measures. The source code is available at https://github.com/zhoudw-zdw/CIL_Survey/.
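The memory-budget concern raised in this abstract can be made concrete with a toy rehearsal-based learner. The sketch below (a hypothetical `ReplayCIL` class with a nearest-class-mean classifier, not any specific surveyed method) shows how a fixed exemplar budget is re-split as new classes arrive:

```python
import random
from collections import defaultdict

class ReplayCIL:
    """Toy class-incremental learner: a shared exemplar memory budget plus
    a nearest-class-mean classifier over every class seen so far."""

    def __init__(self, memory_budget):
        self.memory_budget = memory_budget      # total exemplars across classes
        self.exemplars = defaultdict(list)      # class id -> stored feature vectors

    def observe_task(self, data):
        """data: dict mapping new class ids to lists of feature vectors."""
        for cls, feats in data.items():
            self.exemplars[cls].extend(feats)
        # re-split the fixed budget evenly over all seen classes, so older
        # classes keep some exemplars instead of being forgotten outright
        per_class = max(1, self.memory_budget // len(self.exemplars))
        for cls in self.exemplars:
            random.shuffle(self.exemplars[cls])
            del self.exemplars[cls][per_class:]

    def predict(self, x):
        # classify by distance to the mean of each class's stored exemplars
        def mean(vs):
            return [sum(col) / len(vs) for col in zip(*vs)]
        def dist2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        means = {c: mean(vs) for c, vs in self.exemplars.items()}
        return min(means, key=lambda c: dist2(means[c], x))

random.seed(0)
learner = ReplayCIL(memory_budget=10)
learner.observe_task({0: [[0.0, 0.0]] * 5, 1: [[5.0, 5.0]] * 5})   # task 1
learner.observe_task({2: [[0.0, 5.0]] * 5})                        # task 2
```

After the second task the 10-exemplar budget is shared by three classes, yet the classifier still covers all of them, which is the kind of memory-aligned behavior the survey's comparison protocol measures.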

Citations: 0
Learning Many-to-Many Mapping for Unpaired Real-World Image Super-resolution and Downscaling.
Pub Date : 2024-07-16 DOI: 10.1109/TPAMI.2024.3428546
Wanjie Sun, Zhenzhong Chen

Learning-based single image super-resolution (SISR) for real-world images has been an active research topic yet a challenging task, due to the lack of paired low-resolution (LR) and high-resolution (HR) training images. Most existing unsupervised real-world SISR methods adopt a two-stage training strategy: first synthesizing realistic LR images from their HR counterparts, then training the super-resolution (SR) models in a supervised manner. However, the image degradation and SR models in this strategy are trained separately, ignoring the inherent mutual dependency between downscaling and its inverse upscaling process. Additionally, the ill-posed nature of image degradation is not fully considered. In this paper, we propose an image downscaling and SR model dubbed SDFlow, which simultaneously learns a bidirectional many-to-many mapping between real-world LR and HR images in an unsupervised manner. The main idea of SDFlow is to decouple image content and degradation information in the latent space, where the content information distributions of LR and HR images are matched in a common latent space. Degradation information of the LR images and the high-frequency information of the HR images are fitted to an easy-to-sample conditional distribution. Experimental results on real-world image SR datasets indicate that SDFlow can generate diverse realistic LR and SR images both quantitatively and qualitatively.
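SDFlow is a flow-based model, and the invertibility that lets it map between image and latent spaces comes from coupling layers. A minimal sketch of a generic affine coupling layer (the `conditioner` function is a stand-in for a learned network, not the paper's design):

```python
import math

def conditioner(x1):
    """Stand-in for a learned network: maps the untouched half to a
    log-scale and shift for the other half. Any function works here;
    invertibility never relies on inverting this network."""
    log_scale = [math.tanh(v) for v in x1]
    shift = [0.5 * v for v in x1]
    return log_scale, shift

def coupling_forward(x):
    d = len(x) // 2
    x1, x2 = x[:d], x[d:]
    s, t = conditioner(x1)
    y2 = [xi * math.exp(si) + ti for xi, si, ti in zip(x2, s, t)]
    return x1 + y2          # first half passes through unchanged

def coupling_inverse(y):
    d = len(y) // 2
    y1, y2 = y[:d], y[d:]
    s, t = conditioner(y1)  # y1 == x1, so the conditioner output is recovered
    x2 = [(yi - ti) * math.exp(-si) for yi, si, ti in zip(y2, s, t)]
    return y1 + x2

x = [0.3, -1.2, 2.0, 0.7]
reconstructed = coupling_inverse(coupling_forward(x))
```

Because the first half passes through untouched, the inverse can recompute exactly the same scale and shift, which is what makes sampling diverse LR/SR outputs from an easy-to-sample latent distribution possible.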

Citations: 0
Markov Progressive Framework, a Universal Paradigm for Modeling Long Videos.
Pub Date : 2024-07-12 DOI: 10.1109/TPAMI.2024.3426998
Bo Pang, Gao Peng, Yizhuo Li, Cewu Lu

Compared to images, video, as an increasingly mainstream visual medium, contains more semantic information. For this reason, the computational complexity of video models is an order of magnitude larger than that of their image-level counterparts, growing with the square of the number of frames. Constrained by computational resources, training video models to learn long-term temporal semantics end-to-end is quite a challenge. Currently, the mainstream method is to split a raw video into clips, leading to an incomplete, fragmentary temporal information flow and a failure to model long-term semantics. To solve this problem, in this paper, we design the Markov Progressive framework (MaPro), a theoretical framework consisting of the progressive modeling method and a paradigm model tailored for it. Inspired by natural language processing techniques for long sentences, the core idea of MaPro is to find a paradigm model consisting of the proposed Markov operators, which can be trained in multiple sequential steps while ensuring that the multi-step progressive modeling is equivalent to conventional end-to-end modeling. By training the paradigm model under the progressive method, we are able to model long videos end-to-end with limited resources and ensure the effective transmission of long-term temporal information. We provide detailed implementations of this theoretical system on mainstream CNN- and Transformer-based models, which are modified to conform to the Markov paradigm. The theoretical paradigm, as a basic model, is the lower bound of model efficiency. With it, we further explore more sophisticated designs for CNN- and Transformer-based methods specifically. As a general and robust training method, it yields significant performance improvements on different backbones and datasets in our experiments. As an illustrative example, the proposed method improves the SlowOnly network by 4.1 mAP on Charades and 2.5 top-1 accuracy on Kinetics. For TimeSformer, MaPro improves its performance on Kinetics by 2.0 top-1 accuracy. Importantly, all these improvements are achieved with little parameter and computation overhead. We hope the MaPro method can provide the community with new insight into modeling long videos.
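The central claim, that multi-step progressive modeling can match end-to-end modeling when the model is built from Markov operators, is easy to illustrate with a toy recurrence (an exponential moving average stands in for the paper's operators, purely as an assumption for illustration):

```python
def markov_step(state, frame, alpha=0.9):
    """One application of a Markov operator: the new state depends only on
    the previous state and the current frame (here an exponential moving
    average, a stand-in for the paper's learned operators)."""
    return alpha * state + (1.0 - alpha) * frame

def run(frames, state=0.0):
    # apply the operator frame by frame, carrying the state forward
    for f in frames:
        state = markov_step(state, f)
    return state

video = [float(i) for i in range(12)]
end_to_end = run(video)                              # whole video at once
progressive = 0.0
for clip in (video[0:4], video[4:8], video[8:12]):   # clip-by-clip processing
    progressive = run(clip, progressive)
assert abs(end_to_end - progressive) < 1e-12
```

Because the state carried between clips summarizes everything needed from the past, processing the video clip by clip reproduces the end-to-end result exactly, which is the property MaPro engineers into CNN and Transformer backbones.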

Citations: 0
Journal
IEEE transactions on pattern analysis and machine intelligence