
IEEE Transactions on Pattern Analysis and Machine Intelligence: Latest Publications

GeoDTR+: Toward Generic Cross-View Geolocalization via Geometric Disentanglement.
Pub Date: 2024-08-14 DOI: 10.1109/TPAMI.2024.3443652
Xiaohan Zhang, Xingyu Li, Waqas Sultani, Chen Chen, Safwan Wshah

Cross-View Geo-Localization (CVGL) estimates the location of a ground image by matching it to a geo-tagged aerial image in a database. Recent works achieve outstanding progress on CVGL benchmarks. However, existing methods still suffer from poor performance in cross-area evaluation, in which the training and testing data are captured from completely distinct areas. We attribute this deficiency to the lack of ability to extract the geometric layout of visual features and models' overfitting to low-level details. Our preliminary work [1] introduced a Geometric Layout Extractor (GLE) to capture the geometric layout from input features. However, the previous GLE does not fully exploit information in the input feature. In this work, we propose GeoDTR+ with an enhanced GLE module that better models the correlations among visual features. To fully explore the LS techniques from our preliminary work, we further propose Contrastive Hard Samples Generation (CHSG) to facilitate model training. Extensive experiments show that GeoDTR+ achieves state-of-the-art (SOTA) results in cross-area evaluation on CVUSA [2], CVACT [3], and VIGOR [4] by a large margin (16.44%, 22.71%, and 13.66% without polar transformation) while keeping the same-area performance comparable to existing SOTA. Moreover, we provide detailed analyses of GeoDTR+. Our code will be available at https://gitlab.com/vail-uvm/geodtr_plus.
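The enhanced GLE is the core architectural change. As a rough intuition for what a geometric layout extractor does, the sketch below uses a small set of learned queries that attend over spatial feature positions to produce layout descriptors. This is a minimal illustration only, not the authors' implementation; module sizes, the attention form, and all names here are assumptions, and the actual code lives in the linked repository.

```python
import torch
import torch.nn as nn

class GeometricLayoutExtractor(nn.Module):
    """Toy stand-in for a GLE: K learned queries softly attend over the
    spatial positions of a feature map and return K layout descriptors."""
    def __init__(self, channels: int, num_descriptors: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_descriptors, channels))
        self.scale = channels ** -0.5

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) backbone features
        B, C, H, W = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)                     # (B, HW, C)
        attn = (self.queries @ tokens.transpose(1, 2)) * self.scale   # (B, K, HW)
        layout = attn.softmax(dim=-1)        # spatial attention per descriptor
        return layout @ tokens               # (B, K, C) layout descriptors
```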

Citations: 0
Graph Multi-Convolution and Attention Pooling for Graph Classification.
Pub Date: 2024-08-14 DOI: 10.1109/TPAMI.2024.3443253
Yuhua Xu, Junli Wang, Mingjian Guang, Changjun Jiang

Many studies have achieved excellent performance in analyzing graph-structured data. However, learning graph-level representations for graph classification is still a challenging task. Existing graph classification methods usually pay less attention to the fusion of node features and ignore the effects of different-hop neighborhoods on nodes in the graph convolution process. Moreover, they discard some nodes directly during the graph pooling process, resulting in the loss of graph information. To tackle these issues, we propose a new Graph Multi-Convolution and Attention Pooling based graph classification method (GMCAP). Specifically, the designed Graph Multi-Convolution (GMConv) layer explicitly fuses node features learned from different perspectives. The proposed weight-based aggregation module combines the outputs of all GMConv layers to adaptively exploit the information over different-hop neighborhoods and generate informative node representations. Furthermore, the designed Local information and Global Attention based Pooling (LGAPool) utilizes the local information of a graph to select several important nodes and, when reconstructing a pooled graph, aggregates the information of the unselected nodes into the selected ones via a global attention mechanism, thus effectively reducing the loss of graph information. Extensive experiments show that GMCAP outperforms the state-of-the-art methods on graph classification tasks, demonstrating that GMCAP can learn graph-level representations effectively.
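The pooling step is what avoids discarding node information outright. Below is a minimal single-graph sketch of an LGAPool-style operation, under the assumptions that node importance scores are already given and that plain dot-product attention routes dropped-node features into the kept nodes; the paper's actual module also derives the scores from local graph structure.

```python
import torch

def lga_pool(x: torch.Tensor, scores: torch.Tensor, ratio: float = 0.5):
    """Sketch of attention-based pooling for one graph: keep the top-scoring
    nodes, then fold the discarded nodes into them via attention.
    x: (N, D) node features, scores: (N,) importance scores."""
    n, d = x.shape
    k = max(1, int(n * ratio))
    keep = scores.topk(k).indices
    drop_mask = torch.ones(n, dtype=torch.bool, device=x.device)
    drop_mask[keep] = False
    kept, dropped = x[keep], x[drop_mask]
    if dropped.numel():
        # Route information from unselected nodes into the selected ones.
        attn = torch.softmax(kept @ dropped.t() / d ** 0.5, dim=-1)
        kept = kept + attn @ dropped
    return kept, keep  # pooled features and indices of the surviving nodes
```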

Citations: 0
A New Brain Network Construction Paradigm for Brain Disorder Via Diffusion-Based Graph Contrastive Learning.
Pub Date: 2024-08-13 DOI: 10.1109/TPAMI.2024.3442811
Yongcheng Zong, Qiankun Zuo, Michael Kwok-Po Ng, Baiying Lei, Shuqiang Wang

Brain network analysis plays an increasingly important role in studying brain function and exploring disease mechanisms. However, existing brain network construction tools have some limitations, including dependence on users' empirical choices, weak consistency across repeated experiments, and time-consuming processing. In this work, a diffusion-based brain network pipeline, DGCL, is designed for end-to-end construction of brain networks. Initially, the brain region-aware module (BRAM) precisely determines the spatial locations of brain regions via the diffusion process, avoiding subjective parameter selection. Subsequently, DGCL employs graph contrastive learning to optimize brain connections by eliminating individual differences in redundant connections unrelated to diseases, thereby enhancing the consistency of brain networks within the same group. Finally, the node-graph contrastive loss and the classification loss jointly constrain the learning process of the model to obtain the reconstructed brain network, which is then used to analyze important brain connections. Validation on two datasets, ADNI and ABIDE, demonstrates that DGCL surpasses traditional methods and other deep learning models in predicting disease development stages. Significantly, the proposed model improves the efficiency and generalization of brain network construction. In summary, the proposed DGCL can serve as a universal brain network construction scheme that effectively identifies important brain connections through generative paradigms, and it has the potential to provide disease-interpretability support for neuroscience research.
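To make the node-graph contrastive term concrete, here is a generic version of such a loss (not necessarily DGCL's exact formulation): every node embedding is pulled toward its own graph's embedding and pushed away from the other graphs in the batch.

```python
import torch
import torch.nn.functional as F

def node_graph_contrastive_loss(node_emb: torch.Tensor,
                                graph_emb: torch.Tensor,
                                temperature: float = 0.2) -> torch.Tensor:
    """Generic node-graph InfoNCE: each graph is the positive for its own
    nodes, and the other graphs in the batch are negatives.
    node_emb: (B, N, D) per-graph node embeddings, graph_emb: (B, D)."""
    B, N, D = node_emb.shape
    nodes = F.normalize(node_emb.reshape(B * N, D), dim=-1)
    graphs = F.normalize(graph_emb, dim=-1)
    logits = nodes @ graphs.t() / temperature          # (B*N, B) similarities
    # Row i*N + j holds node j of graph i, so its target class is i.
    targets = torch.arange(B, device=nodes.device).repeat_interleave(N)
    return F.cross_entropy(logits, targets)
```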

Citations: 0
Towards Context-Aware Emotion Recognition Debiasing from a Causal Demystification Perspective via De-confounded Training.
Pub Date: 2024-08-13 DOI: 10.1109/TPAMI.2024.3443129
Dingkang Yang, Kun Yang, Haopeng Kuang, Zhaoyu Chen, Yuzheng Wang, Lihua Zhang

Understanding emotions from diverse contexts has received widespread attention in computer vision communities. The core philosophy of Context-Aware Emotion Recognition (CAER) is to provide valuable semantic cues for recognizing the emotions of target persons by leveraging rich contextual information. Current approaches invariably focus on designing sophisticated structures to extract perceptually critical representations from contexts. Nevertheless, a long-neglected dilemma is that a severe context bias in existing datasets results in an unbalanced distribution of emotional states among different contexts, causing biased visual representation learning. From a causal demystification perspective, the harmful bias is identified as a confounder that misleads existing models to learn spurious correlations based on likelihood estimation, limiting the models' performance. To address the issue, we embrace causal inference to disentangle the models from the impact of such bias, and formulate the causalities among variables in the CAER task via a customized causal graph. Subsequently, we present a Contextual Causal Intervention Module (CCIM) to de-confound the confounder, which is built upon backdoor adjustment theory to facilitate seeking approximate causal effects during model training. As a plug-and-play component, CCIM can easily integrate with existing approaches and bring significant improvements. Systematic experiments on three datasets demonstrate the effectiveness of our CCIM.
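The backdoor adjustment underlying CCIM replaces the observational P(Y|X) with P(Y|do(X)) = Σ_z P(Y|X, z) P(z), where z ranges over the context confounder. A common way to approximate this in a network, sketched below with assumed dimensions, dictionary size, and fusion (the real CCIM is defined in the paper), is to attend over a fixed confounder dictionary and mix the expected confounder back into the feature.

```python
import torch
import torch.nn as nn

class ContextualCausalIntervention(nn.Module):
    """Sketch in the spirit of CCIM: approximate the backdoor adjustment
    sum_z P(Y|X,z) P(z) by attention over a learned confounder dictionary
    (in practice often built from averaged context prototypes)."""
    def __init__(self, dim: int, num_confounders: int = 64):
        super().__init__()
        self.dictionary = nn.Parameter(torch.randn(num_confounders, dim))
        self.log_prior = nn.Parameter(torch.zeros(num_confounders))  # log P(z)
        self.query = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, dim) feature of the target person in context
        attn = self.query(x) @ self.dictionary.t() * x.size(-1) ** -0.5
        weights = (attn + self.log_prior).softmax(dim=-1)  # (B, K)
        z = weights @ self.dictionary    # expected confounder E_z[z | x]
        return x + z                     # de-confounded feature
```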

Citations: 0
MAC: Maximal Cliques for 3D Registration.
Pub Date: 2024-08-13 DOI: 10.1109/TPAMI.2024.3442911
Jiaqi Yang, Xiyu Zhang, Peng Wang, Yulan Guo, Kun Sun, Qiao Wu, Shikun Zhang, Yanning Zhang

This paper presents a maximal cliques (MAC) method for 3D point cloud registration (PCR). The key insight is to loosen the previous maximum clique constraint and mine more local consensus information in a graph for accurate pose hypothesis generation: 1) A compatibility graph is constructed to render the affinity relationship between initial correspondences. 2) We search for maximal cliques in the graph, each representing a consensus set. 3) Transformation hypotheses are computed for the selected cliques by the SVD algorithm, and the best hypothesis is used to perform registration. In addition, we present MAC-OP, a variant of MAC for the case where an overlap prior is given. The overlap prior further enhances MAC in several technical aspects, such as graph construction with re-weighted nodes, hypothesis generation from cliques with additional constraints, and hypothesis evaluation with overlap-aware weights. Extensive experiments demonstrate that both MAC and MAC-OP effectively increase registration recall, outperform various state-of-the-art methods, and boost the performance of deep-learned methods. For instance, MAC combined with GeoTransformer achieves a state-of-the-art registration recall of 95.7% / 78.9% on 3DMatch / 3DLoMatch. We also perform synthetic experiments on 3DMatch-LIR / 3DLoMatch-LIR, a dataset with extremely low inlier ratios for 3D registration in ultra-challenging cases. Code will be available at: https://github.com/zhangxy0517/3D-Registration-with-Maximal-Cliques.
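Steps 1-3 map directly onto a short pipeline. The sketch below is an illustrative re-implementation of the idea, not the authors' optimized code: rigid-distance compatibility builds the graph, networkx enumerates maximal cliques, and the classic Kabsch/SVD solver fits a pose per clique; the threshold tau and the clique cap are assumed knobs.

```python
import numpy as np
import networkx as nx

def mac_register(src, dst, tau=0.1, max_cliques=500):
    """Illustrative MAC-style registration. src, dst: (N, 3) arrays of
    putatively corresponding keypoints. Returns the best (R, t)."""
    # 1) Compatibility graph: two correspondences are compatible if they
    #    preserve pairwise distance under a rigid transform.
    d_src = np.linalg.norm(src[:, None] - src[None], axis=-1)
    d_dst = np.linalg.norm(dst[:, None] - dst[None], axis=-1)
    adj = (np.abs(d_src - d_dst) < tau) & ~np.eye(len(src), dtype=bool)
    G = nx.Graph(adj)
    best, best_inliers = (np.eye(3), np.zeros(3)), -1
    # 2) Each maximal clique is a consensus set; 3) fit a pose per clique.
    for i, clique in enumerate(nx.find_cliques(G)):
        if i >= max_cliques:
            break
        if len(clique) < 3:
            continue
        P, Q = src[clique], dst[clique]
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)  # Kabsch algorithm
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        t = Q.mean(0) - R @ P.mean(0)
        inliers = (np.linalg.norm(src @ R.T + t - dst, axis=1) < tau).sum()
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best  # hypothesis with the highest inlier count
```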

Citations: 0
Pair then Relation: Pair-Net for Panoptic Scene Graph Generation.
Pub Date: 2024-08-13 DOI: 10.1109/TPAMI.2024.3442301
Jinghao Wang, Zhengyu Wen, Xiangtai Li, Zujin Guo, Jingkang Yang, Ziwei Liu

Panoptic Scene Graph (PSG) is a challenging task in Scene Graph Generation (SGG) that aims to create a more comprehensive scene graph representation using panoptic segmentation instead of boxes. Compared to SGG, PSG poses several challenging problems: pixel-level segment outputs and full relationship exploration (it also considers relations between things and stuff). Thus, current PSG methods have limited performance, which hinders downstream tasks or applications. This work aims to design a novel and strong baseline for PSG. To achieve that, we first conduct an in-depth analysis to identify the bottleneck of current PSG models, finding that inter-object pair-wise recall is a crucial factor ignored by previous PSG methods. Based on this and the recent query-based frameworks, we present a novel framework: Pair then Relation (Pair-Net), which uses a Pair Proposal Network (PPN) to learn and filter sparse pair-wise relationships between subjects and objects. Motivated by the observed sparsity of object pairs, we design a lightweight Matrix Learner within the PPN, which directly learns pair-wise relationships for pair proposal generation. Through extensive ablation and analysis, our approach significantly improves upon a solid segmenter baseline. Notably, our method achieves over 10% absolute gains compared to our baseline, PSGFormer. The code of this paper is publicly available at https://github.com/king159/Pair-Net.
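As a toy picture of the pair-then-relation idea, the sketch below implements a PPN-like scorer under assumed shapes and layers: subject and object projections of the query embeddings produce an N x N pair matrix, and the top-k entries become relation proposals. The real Pair-Net (see the linked repository) trains such a scorer with supervision on ground-truth pairs.

```python
import torch
import torch.nn as nn

class PairProposalNetwork(nn.Module):
    """Sketch of a PPN-style pair scorer: score every subject-object pair
    with a dot product and keep the top-k pairs as relation proposals."""
    def __init__(self, dim: int, topk: int = 100):
        super().__init__()
        self.subj = nn.Linear(dim, dim)
        self.obj = nn.Linear(dim, dim)
        self.topk = topk

    def forward(self, queries: torch.Tensor):
        # queries: (N, dim) embeddings of the panoptic segments
        n = queries.size(0)
        score = self.subj(queries) @ self.obj(queries).t()   # (N, N)
        eye = torch.eye(n, dtype=torch.bool, device=queries.device)
        score = score.masked_fill(eye, float('-inf'))        # no self-pairs
        top = score.flatten().topk(min(self.topk, n * n - n))
        # Flattened index = subj * n + obj for a row-major (N, N) matrix.
        return top.indices // n, top.indices % n, top.values
```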

Citations: 0
Dual-Pixel Raindrop Removal.
Pub Date: 2024-08-13 DOI: 10.1109/TPAMI.2024.3442955
Yizhou Li, Yusuke Monno, Masatoshi Okutomi

Removing raindrops from images is a significant task for various computer vision applications. In this paper, we propose the first method using a dual-pixel (DP) sensor to better address raindrop removal. Our key observation is that raindrops attached to a glass window yield noticeable disparities between DP's left-half and right-half images, while almost no disparity exists for in-focus backgrounds. Therefore, the DP disparities can be utilized for robust raindrop detection. The DP disparities also bring the advantage that the background regions occluded by raindrops are slightly shifted between the left-half and right-half images. Therefore, fusing the information from the left-half and right-half images can lead to more accurate background texture recovery. Based on the above motivation, we propose a DP Raindrop Removal Network (DPRRN) consisting of DP raindrop detection and DP fused raindrop removal. To efficiently generate a large amount of training data, we also propose a novel pipeline to add synthetic raindrops to real-world background DP images. Experimental results on constructed synthetic and real-world datasets demonstrate that our DPRRN outperforms existing state-of-the-art methods, especially showing better robustness to real-world situations. Our source codes and datasets will be available at http://www.ok.sc.e.titech.ac.jp/res/SIR/dprrn/dprrn.html.
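The key observation translates into a very simple detector in principle. The toy sketch below scores a small set of horizontal shifts between the two DP half-images and flags pixels whose best-matching shift is non-zero as raindrop candidates; the shift range and threshold are arbitrary assumptions here, and DPRRN learns detection end to end rather than using this hand-crafted rule.

```python
import numpy as np

def dp_raindrop_mask(left, right, max_shift=3, min_disp=1):
    """Toy DP raindrop detector: defocused raindrops show a left/right
    disparity while the in-focus background does not.
    left, right: (H, W) grayscale DP half-images."""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    costs = []
    for s in range(-max_shift, max_shift + 1):
        # np.roll wraps at the borders; acceptable for a toy illustration.
        costs.append(np.abs(left - np.roll(right, s, axis=1)))
    # Per-pixel winner-take-all disparity (unsmoothed, so it will be noisy).
    disparity = np.stack(costs).argmin(axis=0) - max_shift
    return np.abs(disparity) >= min_disp  # True where a raindrop is suspected
```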

Citations: 0
Transformation Decoupling Strategy based on Screw Theory for Deterministic Point Cloud Registration with Gravity Prior.
Pub Date: 2024-08-13 DOI: 10.1109/TPAMI.2024.3442234
Xinyi Li, Zijian Ma, Yinlong Liu, Walter Zimmer, Hu Cao, Feihu Zhang, Alois Knoll

Point cloud registration is challenging in the presence of heavy outlier correspondences. This paper focuses on the robust correspondence-based registration problem with a gravity prior, which often arises in practice. The gravity directions are typically obtained by inertial measurement units (IMUs) and reduce the degrees of freedom (DOF) of rotation from 3 to 1. We propose a novel transformation decoupling strategy that leverages screw theory. This strategy decomposes the original 4-DOF problem into three sub-problems with 1 DOF, 2 DOF, and 1 DOF, respectively, enhancing computational efficiency. Specifically, the first 1-DOF sub-problem concerns the translation along the rotation axis, and we propose an interval stabbing-based method to solve it. The second, with 2 DOF, concerns the pole, an auxiliary variable in screw theory, and we utilize a branch-and-bound method to solve it. The last 1-DOF sub-problem concerns the rotation angle, and we propose a global voting method for its estimation. The proposed method solves three consensus maximization sub-problems sequentially, leading to efficient and deterministic registration. In particular, it can even handle the correspondence-free registration problem owing to its significant robustness. Extensive experiments on both synthetic and real-world datasets demonstrate that our method is more efficient and robust than state-of-the-art methods, even when dealing with outlier rates exceeding 99%.
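The interval stabbing primitive used for the first sub-problem has a classic sweep-line solution: each correspondence contributes an interval of translations consistent with it, and the consensus-maximizing translation is a point that stabs the most intervals. A minimal generic sketch, assuming closed intervals:

```python
def interval_stabbing(intervals):
    """Find a value covered by the maximum number of closed intervals.
    intervals: iterable of (lo, hi) pairs. Returns (best_x, best_count).
    For the 1-DOF sub-problem, each interval is the set of translations
    along the rotation axis consistent with one correspondence."""
    events = []
    for lo, hi in intervals:
        events.append((lo, 0))  # opening endpoint; 0 sorts before 1, so an
        events.append((hi, 1))  # interval opens before another closes at x
    events.sort()
    best_count, count, best_x = 0, 0, None
    for x, kind in events:
        if kind == 0:
            count += 1
            if count > best_count:
                best_count, best_x = count, x
        else:
            count -= 1
    return best_x, best_count

# Example: three overlapping translation intervals; x = 2 stabs all three.
print(interval_stabbing([(0, 3), (1, 4), (2, 5)]))  # (2, 3)
```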

Citations: 0
360 Layout Estimation via Orthogonal Planes Disentanglement and Multi-view Geometric Consistency Perception.
Pub Date: 2024-08-13 DOI: 10.1109/TPAMI.2024.3442481
Zhijie Shen, Chunyu Lin, Junsong Zhang, Lang Nie, Kang Liao, Yao Zhao

Existing panoramic layout estimation solutions tend to recover room boundaries from a vertically compressed sequence, yielding imprecise results as the compression process often muddles the semantics between various planes. Besides, these data-driven approaches impose an urgent demand for massive data annotations, which are laborious and time-consuming. For the first problem, we propose an orthogonal plane disentanglement network (termed DOPNet) to distinguish ambiguous semantics. DOPNet consists of three modules that are integrated to deliver distortion-free, semantics-clean, and detail-sharp disentangled representations, which benefit the subsequent layout recovery. For the second problem, we present an unsupervised adaptation technique tailored for horizon-depth and ratio representations. Concretely, we introduce an optimization strategy for decision-level layout analysis and a 1D cost volume construction method for feature-level multi-view aggregation, both of which are designed to fully exploit the geometric consistency across multiple perspectives. The optimizer provides a reliable set of pseudo-labels for network training, while the 1D cost volume enriches each view with comprehensive scene information derived from other perspectives. Extensive experiments demonstrate that our solution outperforms other SoTA models on both monocular layout estimation and multi-view layout estimation tasks.

Citations: 0
Self-Supervised Multimodal Learning: A Survey.
Pub Date: 2024-08-07 DOI: 10.1109/TPAMI.2024.3429301
Yongshuo Zong, Oisin Mac Aodha, Timothy Hospedales

Multimodal learning, which aims to understand and analyze information from multiple modalities, has achieved substantial progress in the supervised regime in recent years. However, the heavy dependence on data paired with expensive human annotations impedes scaling up models. Meanwhile, given the availability of large-scale unannotated data in the wild, self-supervised learning has become an attractive strategy to alleviate the annotation bottleneck. Building on these two directions, self-supervised multimodal learning (SSML) provides ways to learn from raw multimodal data. In this survey, we provide a comprehensive review of the state-of-the-art in SSML, in which we elucidate three major challenges intrinsic to self-supervised learning with multimodal data: (1) learning representations from multimodal data without labels, (2) fusion of different modalities, and (3) learning with unaligned data. We then detail existing solutions to these challenges. Specifically, we consider (1) objectives for learning from multimodal unlabeled data via self-supervision, (2) model architectures from the perspective of different multimodal fusion strategies, and (3) pair-free learning strategies for coarse-grained and fine-grained alignment. We also review real-world applications of SSML algorithms in diverse fields such as healthcare, remote sensing, and machine translation. Finally, we discuss challenges and future directions for SSML. A collection of related resources can be found at: https://github.com/ys-zong/awesome-self-supervised-multimodal-learning.
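Among the self-supervised objectives such a survey catalogues, the symmetric image-text InfoNCE loss popularized by CLIP is a representative example of learning from unlabeled multimodal pairs: matched pairs attract, all other in-batch pairings repel.

```python
import torch
import torch.nn.functional as F

def multimodal_info_nce(img_emb: torch.Tensor,
                        txt_emb: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """CLIP-style symmetric InfoNCE over a batch of paired embeddings.
    img_emb, txt_emb: (B, D) embeddings where row i of each is a pair."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(img.size(0), device=img.device)  # diagonal matches
    # Average the image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```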

Citations: 0