
Latest Publications: IEEE Transactions on Pattern Analysis and Machine Intelligence

Decentralized Federated Learning With Distributed Aggregation Weight Optimization
IF 18.6 · Pub Date: 2025-12-05 · DOI: 10.1109/TPAMI.2025.3640709
Zhiyuan Zhai;Xiaojun Yuan;Xin Wang;Geoffrey Ye Li
Decentralized federated learning (DFL) is an emerging paradigm that enables edge devices to collaboratively train a learning model via device-to-device (D2D) communication, without the coordination of a parameter server (PS). Aggregation weights, also known as mixing weights, are crucial to the DFL process and impact both learning efficiency and accuracy. Conventional designs rely on a so-called central entity to collect all local information and conduct system optimization to obtain appropriate weights. In this paper, we develop a distributed aggregation weight optimization algorithm that aligns with the decentralized nature of DFL. We analyze convergence by quantitatively capturing the impact of the aggregation weights over decentralized communication networks. Based on this analysis, we formulate a learning performance optimization problem that designs the aggregation weights to minimize the derived convergence bound. The optimization problem is further transformed into an eigenvalue optimization problem and solved by our proposed subgradient-based algorithm in a distributed fashion. In our algorithm, edge devices need only local information, obtained through local (D2D) communication, to compute the optimal aggregation weights, just like the learning itself. Therefore, the optimization, communication, and learning processes can all be conducted in a distributed fashion, which leads to a genuinely distributed DFL system. Numerical results demonstrate the superiority of the proposed algorithm in practical DFL deployment.
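To make the aggregation step concrete, below is a minimal NumPy sketch of one D2D mixing round. It uses Metropolis-Hastings weights as a hand-crafted stand-in for the paper's optimized weights (the actual method derives the weights by a distributed subgradient algorithm on an eigenvalue bound, not reproduced here); the `metropolis_weights` helper and the ring topology are illustrative assumptions.

```python
import numpy as np

def metropolis_weights(adj: np.ndarray) -> np.ndarray:
    """Metropolis-Hastings mixing weights for an undirected device graph.

    A common hand-crafted baseline; the paper instead *optimizes* these
    weights distributively by minimizing a convergence bound.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j] and i != j:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # rows sum to 1; symmetric, so doubly stochastic
    return W

def aggregation_round(models: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One D2D aggregation step: each device mixes its neighbors' parameters."""
    return W @ models  # models: (n_devices, n_params)

# toy 4-device ring topology
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
W = metropolis_weights(adj)
models = np.random.randn(4, 3)
print(aggregation_round(models, W))
```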
Citations: 0
Harnessing Lightweight Transformer With Contextual Synergic Enhancement for Efficient 3D Medical Image Segmentation
IF 18.6 · Pub Date: 2025-12-05 · DOI: 10.1109/TPAMI.2025.3640233
Xinyu Liu;Zhen Chen;Wuyang Li;Chenxin Li;Yixuan Yuan
Transformers have shown remarkable performance in 3D medical image segmentation, but their high computational requirements and need for large amounts of labeled data limit their applicability. To address these challenges, we consider two crucial aspects: model efficiency and data efficiency. Specifically, we propose Light-UNETR, a lightweight transformer designed to achieve model efficiency. Light-UNETR features a Lightweight Dimension Reductive Attention (LIDR) module, which reduces spatial and channel dimensions while capturing both global and local features via multi-branch attention. Additionally, we introduce a Compact Gated Linear Unit (CGLU) to selectively control channel interaction with minimal parameters. Furthermore, we introduce a Contextual Synergic Enhancement (CSE) learning strategy, which aims to boost the data efficiency of transformers. It first leverages extrinsic contextual information to support learning from unlabeled data with Attention-Guided Replacement, then applies Spatial Masking Consistency, which utilizes intrinsic contextual information to enhance spatial context reasoning for unlabeled data. Extensive experiments on various benchmarks demonstrate the superiority of our approach in both performance and efficiency. For example, with only 10% labeled data on the Left Atrial Segmentation dataset, our method surpasses BCP by 1.43% Jaccard while drastically reducing FLOPs by 90.8% and parameters by 85.8%.
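As a rough illustration of the dimension-reduction idea (not the paper's exact LIDR module), the PyTorch sketch below computes attention with full-resolution queries but spatially pooled keys and values, which cuts the attention cost from O(N²) to O(N·N/r³) for 3D inputs; the module name, pooling choice, and reduction factor are assumptions.

```python
import torch
import torch.nn as nn

class DimensionReductiveAttention(nn.Module):
    """Illustrative sketch: attention with reduced key/value resolution.

    Hypothetical simplification of a dimension-reductive attention block:
    keys and values are computed on a spatially pooled copy of the tokens,
    while queries keep the full resolution.
    """
    def __init__(self, dim: int, heads: int = 4, reduction: int = 2):
        super().__init__()
        self.pool = nn.AvgPool3d(kernel_size=reduction, stride=reduction)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) volumetric feature map
        b, c, d, h, w = x.shape
        q = x.flatten(2).transpose(1, 2)               # (B, N, C) full-res queries
        kv = self.pool(x).flatten(2).transpose(1, 2)   # (B, N/r^3, C) reduced keys/values
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).reshape(b, c, d, h, w)

x = torch.randn(1, 32, 8, 8, 8)
print(DimensionReductiveAttention(32)(x).shape)  # torch.Size([1, 32, 8, 8, 8])
```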
Citations: 0
Heatmap Pooling Network for Action Recognition From RGB Videos
IF 18.6 · Pub Date: 2025-12-05 · DOI: 10.1109/TPAMI.2025.3640697
Mengyuan Liu;Jinfu Liu;Yongkang Jiang;Bin He
Human action recognition (HAR) in videos has garnered widespread attention due to the rich information in RGB videos. Nevertheless, existing methods for extracting deep features from RGB videos face challenges such as information redundancy, susceptibility to noise, and high storage costs. To address these issues and fully harness the useful information in videos, we propose a novel heatmap pooling network (HP-Net) for action recognition from videos, which extracts information-rich, robust, and concise pooled features of the human body through a feedback pooling module. The extracted pooled features demonstrate obvious performance advantages over previously used pose data and heatmap features from videos. In addition, we design a spatial-motion co-learning module and a text refinement modulation module to integrate the extracted pooled features with other multimodal data, enabling more robust action recognition. Extensive experiments on several benchmarks, namely NTU RGB+D 60, NTU RGB+D 120, Toyota-Smarthome, and uncrewed aerial vehicle (UAV)-Human, consistently verify the effectiveness of our HP-Net, which outperforms existing human action recognition methods.
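The core pooling idea can be sketched in a few lines of PyTorch: each keypoint heatmap acts as a normalized spatial weighting over a feature map. This is a generic heatmap-weighted pooling, not the paper's feedback pooling module; the shapes and the normalization below are assumptions.

```python
import torch

def heatmap_pool(features: torch.Tensor, heatmaps: torch.Tensor) -> torch.Tensor:
    """Pool per-keypoint features by weighting a feature map with heatmaps.

    Each of K heatmaps acts as a spatial attention mask over a C-channel
    feature map, yielding one C-dim pooled vector per keypoint.
    """
    # features: (B, C, H, W); heatmaps: (B, K, H, W), assumed non-negative
    w = heatmaps / heatmaps.sum(dim=(2, 3), keepdim=True).clamp_min(1e-6)
    return torch.einsum('bkhw,bchw->bkc', w, features)  # (B, K, C)

feats = torch.randn(2, 64, 56, 56)
maps = torch.rand(2, 17, 56, 56)   # e.g., 17 human-joint heatmaps
print(heatmap_pool(feats, maps).shape)  # torch.Size([2, 17, 64])
```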
Citations: 0
Enhanced Spatiotemporal Consistency for Image-to-LiDAR Data Pretraining
IF 18.6 · Pub Date: 2025-12-05 · DOI: 10.1109/TPAMI.2025.3640589
Xiang Xu;Lingdong Kong;Hui Shuai;Wenwei Zhang;Liang Pan;Kai Chen;Ziwei Liu;Qingshan Liu
LiDAR representation learning has emerged as a promising approach to reducing reliance on costly and labor-intensive human annotations. While existing methods primarily focus on spatial alignment between LiDAR and camera sensors, they often overlook the temporal dynamics critical for capturing motion and scene continuity in driving scenarios. To address this limitation, we propose SuperFlow++, a novel framework that integrates spatiotemporal cues in both pretraining and downstream tasks using consecutive LiDAR-camera pairs. SuperFlow++ introduces four key components: (1) a view consistency alignment module to unify semantic information across camera views, (2) a dense-to-sparse consistency regularization mechanism to enhance feature robustness across varying point cloud densities, (3) a flow-based contrastive learning approach that models temporal relationships for improved scene understanding, and (4) a temporal voting strategy that propagates semantic information across LiDAR scans to improve prediction consistency. Extensive evaluations on 11 heterogeneous LiDAR datasets demonstrate that SuperFlow++ outperforms state-of-the-art methods across diverse tasks and driving conditions. Furthermore, by scaling both 2D and 3D backbones during pretraining, we uncover emergent properties that provide deeper insights into developing scalable 3D foundation models. With strong generalizability and computational efficiency, SuperFlow++ establishes a new benchmark for data-efficient LiDAR-based perception in autonomous driving.
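For context, the flow-based contrastive component builds on the standard image-to-LiDAR contrastive objective, which can be sketched as an InfoNCE loss over matched point/pixel embeddings. The function below is a generic version of that loss, not SuperFlow++'s full objective; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def pixel_to_point_contrastive(point_feats, pixel_feats, temperature=0.07):
    """InfoNCE loss between matched point/pixel embeddings.

    Generic image-to-LiDAR contrastive objective; the paper adds view-
    consistency alignment, density-level regularization, and flow-based
    temporal pairs on top of a loss of this form.
    """
    # point_feats, pixel_feats: (N, D); row i of each is a matched pair
    p = F.normalize(point_feats, dim=1)
    q = F.normalize(pixel_feats, dim=1)
    logits = p @ q.t() / temperature       # (N, N) similarity matrix
    targets = torch.arange(p.size(0))      # positives on the diagonal
    return F.cross_entropy(logits, targets)

loss = pixel_to_point_contrastive(torch.randn(128, 64), torch.randn(128, 64))
print(loss.item())
```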
Citations: 0
Homophily Edge Augment Graph Neural Network for High-Class Homophily Variance Learning
IF 18.6 · Pub Date: 2025-12-05 · DOI: 10.1109/TPAMI.2025.3640635
Mingjian Guang;Rui Zhang;Dawei Cheng;Xiaoyang Wang;Xin Liu;Jie Yang;Yi Ouyang;Xian Wu;Yefeng Zheng
Graph Neural Networks (GNNs) have achieved remarkable success in machine learning tasks by learning the features of graph data. However, experiments show that vanilla GNNs fail to achieve good classification performance in the field of graph anomaly detection. To address this issue, we propose, and theoretically prove, that high Class Homophily Variance (CHV) is the reason behind the suboptimal performance of GNN models on anomaly detection tasks. Statistical analysis shows that in most standard node classification datasets, homophily levels are similar across all classes, so CHV is low. In contrast, graph anomaly detection datasets have high CHV, as benign nodes are highly homophilic while anomalies are not, leading to a clear separation. To mitigate its impact, we propose a novel GNN model named Homophily Edge Augment Graph Neural Network (HEAug). Different from previous work, our method emphasizes generating new edges with low CHV values, using the original edges as an auxiliary signal. HEAug samples homophily adjacency matrices from scratch using a self-attention mechanism, and leverages nodes that are relevant in the feature space but not directly connected in the original graph. Additionally, we modify the loss function to penalize the generation of unnecessary heterophilic edges by the model. Extensive comparison experiments demonstrate that HEAug achieves the best performance across eight benchmark datasets, covering anomaly detection, edgeless node classification, and adversarial attack. We also define a heterophily attack that increases the CHV value of other graphs, demonstrating the effectiveness of our theory and model in various scenarios.
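To make the CHV statistic concrete, a minimal NumPy sketch is given below: it computes per-class edge homophily on an undirected graph and returns the variance across classes. The paper's exact normalization may differ; this is an illustrative definition only.

```python
import numpy as np

def class_homophily_variance(adj: np.ndarray, labels: np.ndarray) -> float:
    """Compute a Class Homophily Variance (CHV)-style statistic.

    Per-class edge homophily = fraction of edges leaving class-c nodes
    that stay within class c; CHV is then the variance across classes.
    """
    src, dst = np.nonzero(adj)
    homophily = []
    for c in np.unique(labels):
        mask = labels[src] == c                  # edges leaving class-c nodes
        if mask.sum() == 0:
            continue
        homophily.append((labels[dst][mask] == c).mean())
    return float(np.var(homophily))

# toy graph: nodes 0-2 benign (class 0, clustered), node 3 anomalous (class 1)
adj = np.array([[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]])
labels = np.array([0, 0, 0, 1])
print(class_homophily_variance(adj, labels))  # high variance: 0.75 vs 0.0
```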
Citations: 0
Two Decades of Multi-View Clustering: Taxonomy, Application, and Challenge
IF 18.6 · Pub Date: 2025-12-04 · DOI: 10.1109/TPAMI.2025.3640109
Xinwang Liu;Ke Liang;Jun Wang;Suyuan Liu;Xiangke Wang;Huaimin Wang
Multi-view clustering (MVC), as an important machine learning task, aims to group data into distinct clusters by leveraging complementary and consistent information across multiple views. Over the last two decades it has been widely studied, and many methods have been proposed, bringing remarkable development to this field. However, few works comprehensively summarize existing methods and point out the potential challenges for the coming decades. To this end, our survey thoroughly reviews existing MVC methods according to three taxonomies: techniques, fusion strategies, and scenarios. Specifically, seven typical techniques, four fusion strategies, and five typical scenarios are included. Besides, we collect the commonly used datasets and analyze the performance of typical MVC methods. Moreover, we summarize six application areas of existing MVC methods, ranging from computer vision and information retrieval to medical diagnosis and bioinformatics. In particular, we point out seven interesting future directions in this field, which we expect will enlighten readers.
Citations: 0
Semantic Correspondence: Unified Benchmarking and a Strong Baseline
IF 18.6 · Pub Date: 2025-12-04 · DOI: 10.1109/TPAMI.2025.3640429
Kaiyan Zhang;Xinghui Li;Jingyi Lu;Kai Han
Establishing semantic correspondence is a challenging task in computer vision, aiming to match keypoints that share the same semantic information across different images. Benefiting from the rapid development of deep learning, remarkable progress has been made over the past decade. However, a comprehensive review and analysis of this task remain absent. In this paper, we present the first extensive survey of semantic correspondence methods. We first propose a taxonomy that classifies existing methods by the type of their method design. These methods are then categorized accordingly, and we provide a detailed analysis of each approach. Furthermore, we aggregate and summarize the results reported in the literature across various benchmarks into a unified comparative table, with detailed configurations to highlight performance variations. Additionally, to provide a detailed understanding of existing methods for semantic matching, we conduct thorough controlled experiments to analyze the effectiveness of the components of different methods. Finally, we propose a simple yet effective baseline that achieves state-of-the-art performance on multiple benchmarks, providing a solid foundation for future research in this field. We hope this survey serves as a comprehensive reference and consolidated baseline for future development.
Citations: 0
Toward an Advanced Temporal Graph Network in Hyperbolic Space
IF 18.6 · Pub Date: 2025-12-04 · DOI: 10.1109/TPAMI.2025.3640172
Viet Quan Le;Viet Cuong Ta
Learning over dynamic graphs poses major challenges, including capturing the evolving relationships in the graphs. Inspired by the advantages of hyperbolic embeddings for static graphs, hyperbolic space is expected to capture complex interactions in dynamic graphs as well. However, due to the distortion errors of the standard tangent-space mappings, hyperbolic methods become more sensitive to noise, which reduces their learning capacity. To address the distortion in tangent space, we previously proposed HMPTGN, a temporal graph network that operates directly on the hyperbolic manifold. In this journal paper, we introduce the HMPTGN+ architecture, an extension of the original HMPTGN with major updates to learn better representations of dynamic graphs based on hyperbolic embeddings. Our framework incorporates a high-order graph neural network for extracting spatial dependencies, a dilated causal attention mechanism for modeling temporal patterns while preserving causality, and a curvature-awareness mechanism to capture dynamic structures. Extensive experiments demonstrate the effectiveness of our proposed HMPTGN+ framework over state-of-the-art baselines on both temporal link prediction and temporal new link prediction tasks.
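For readers unfamiliar with the geometry involved, the sketch below shows the Poincaré-ball distance and the exponential map at the origin in PyTorch, i.e., the basic operations whose tangent-space round-trips cause the distortion the paper avoids. HMPTGN+ itself works on the manifold directly; this snippet is background only, with curvature fixed at -1 as an assumption.

```python
import torch

def poincare_distance(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Geodesic distance on the Poincare ball (curvature -1)."""
    sq = torch.sum((x - y) ** 2, dim=-1)
    nx = torch.clamp(1 - torch.sum(x ** 2, dim=-1), min=eps)
    ny = torch.clamp(1 - torch.sum(y ** 2, dim=-1), min=eps)
    return torch.acosh(1 + 2 * sq / (nx * ny))

def expmap0(v: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Exponential map at the origin: lift a tangent vector onto the ball."""
    norm = torch.clamp(v.norm(dim=-1, keepdim=True), min=eps)
    return torch.tanh(norm) * v / norm

a = expmap0(torch.randn(5, 8) * 0.1)
b = expmap0(torch.randn(5, 8) * 0.1)
print(poincare_distance(a, b))  # (5,) pairwise geodesic distances
```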
Citations: 0
Physics-Driven Neural Compensation for Electrical Impedance Tomography
IF 18.6 · Pub Date: 2025-12-03 · DOI: 10.1109/TPAMI.2025.3639647
Chuyu Wang;Huiting Deng;Dong Liu
Electrical Impedance Tomography (EIT) provides a non-invasive, portable imaging modality with significant potential in medical and industrial applications. Despite its advantages, EIT encounters two primary challenges: the ill-posed nature of its inverse problem and the spatially variable, location-dependent sensitivity distribution. Traditional model-based methods mitigate ill-posedness through regularization but overlook sensitivity variability, while supervised deep learning approaches require extensive training data and lack generalization. Recent developments in neural fields have introduced implicit regularization techniques for image reconstruction; however, these methods often overlook the physical principles underlying EIT, thereby limiting their effectiveness. In this study, we propose PhyNC (Physics-driven Neural Compensation), an unsupervised deep learning framework that incorporates the physical principles of EIT. PhyNC addresses both the ill-posed inverse problem and the sensitivity distribution by dynamically allocating neural representational capacity to regions with lower sensitivity, ensuring accurate and balanced conductivity reconstructions. Extensive evaluations on both simulated and experimental data demonstrate that PhyNC outperforms existing methods in terms of detail preservation and artifact resistance, particularly in low-sensitivity regions. Our approach enhances the robustness of EIT reconstructions and provides a flexible framework that can be adapted to other imaging modalities with similar challenges.
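As a rough sketch of the neural-field ingredient, the PyTorch snippet below parameterizes a 2D conductivity map with a coordinate MLP; the architecture, activation, and grid resolution are illustrative assumptions, and PhyNC's sensitivity-aware capacity allocation is not modeled here.

```python
import torch
import torch.nn as nn

class ConductivityField(nn.Module):
    """Coordinate MLP representing a 2D conductivity map implicitly.

    A minimal neural-field sketch (not PhyNC itself): the network maps
    spatial coordinates to conductivity; softplus keeps values positive.
    """
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # conductivity > 0
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return self.net(xy)

# query the field on a 32x32 grid over the imaging domain [-1, 1]^2
grid = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, 32), torch.linspace(-1, 1, 32), indexing='ij'), dim=-1)
sigma = ConductivityField()(grid.reshape(-1, 2)).reshape(32, 32)
print(sigma.shape)  # torch.Size([32, 32])
```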
Citations: 0
Large-Scale 3D Medical Image Pre-Training With Geometric Context Priors
IF 18.6 · Pub Date: 2025-12-03 · DOI: 10.1109/TPAMI.2025.3639593
Linshan Wu;Jiaxin Zhuang;Hao Chen
The scarcity of annotations poses a significant challenge in medical image analysis, which demands extensive effort from radiologists, especially for high-dimensional 3D medical images. Large-scale pre-training has emerged as a promising label-efficient solution, owing to the utilization of large-scale data, large models, and advanced pre-training techniques. However, its development for medical images remains underexplored. The primary challenge lies in harnessing large-scale unlabeled data and learning high-level semantics without annotations. We observe that 3D medical images exhibit consistent geometric context, i.e., consistent geometric relations between different organs, which suggests a promising way to learn consistent representations. Motivated by this, we introduce a simple yet effective Volume Contrast (VoCo) framework that leverages geometric context priors for self-supervision. Given an input volume, we extract base crops from different regions to construct positive and negative pairs for contrastive learning. We then predict the contextual position of a random crop by contrasting its similarity to the base crops. In this way, VoCo implicitly encodes the inherent geometric context into model representations, facilitating high-level semantic learning without annotations. To assess effectiveness, we (1) introduce PreCT-160K, the largest medical image pre-training dataset to date, comprising 160K computed tomography (CT) volumes covering diverse anatomic structures; (2) investigate scaling laws and propose guidelines for tailoring different model sizes to various medical tasks; and (3) build a comprehensive benchmark encompassing 51 medical tasks, including segmentation, classification, registration, and vision-language. Extensive experiments highlight the superiority of VoCo, showcasing promising transferability to unseen modalities and datasets. VoCo notably enhances performance on datasets with limited labeled cases and significantly expedites fine-tuning convergence. Code, datasets, and models are available at https://github.com/Luffy03/Large-Scale-Medical.
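The contextual position prediction at the heart of VoCo can be sketched as a similarity-based classification over base regions, as below; the embedding dimension, the 3x3 region grid, and the temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def voco_position_logits(query_emb, base_embs, temperature=0.1):
    """Predict which region a random crop came from by similarity.

    Sketch of the core idea described in the abstract: embed a set of
    base crops (one per region) and a random crop, then score the crop's
    contextual position by contrasting cosine similarities.
    """
    q = F.normalize(query_emb, dim=-1)   # (D,) random-crop embedding
    b = F.normalize(base_embs, dim=-1)   # (R, D) one row per base region
    return b @ q / temperature           # (R,) position logits

base = torch.randn(9, 128)    # e.g., a 3x3 grid of base regions
query = torch.randn(128)
logits = voco_position_logits(query, base)
loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([4]))  # true region index 4
print(loss.item())
```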
Citations: 0