
Latest publications in IEEE transactions on pattern analysis and machine intelligence

Mine yOur owN Anatomy: Revisiting Medical Image Segmentation With Extremely Limited Labels
Pub Date : 2024-09-13 DOI: 10.1109/TPAMI.2024.3461321
Chenyu You;Weicheng Dai;Fenglin Liu;Yifei Min;Nicha C. Dvornek;Xiaoxiao Li;David A. Clifton;Lawrence Staib;James S. Duncan
Recent studies on contrastive learning have achieved remarkable performance in medical image segmentation by leveraging only a few labels. Existing methods mainly focus on instance discrimination and invariant mapping. However, they face three common pitfalls: (1) tailness: medical image data usually follows an implicit long-tail class distribution, so blindly leveraging all pixels in training can lead to data imbalance and degraded performance; (2) consistency: it remains unclear whether a segmentation model has learned meaningful yet consistent anatomical features, given the intra-class variations between different anatomical features; and (3) diversity: the intra-slice correlations within the entire dataset have received significantly less attention. This motivates us to seek a principled approach for strategically making use of the dataset itself to discover similar yet distinct samples from different anatomical views. In this paper, we introduce a novel semi-supervised medical image segmentation framework termed Mine yOur owN Anatomy (MONA) and make three contributions. First, prior work argues that every pixel matters equally to training; we observe empirically that this alone is unlikely to define meaningful anatomical features, mainly due to the lack of a supervision signal. We show two simple solutions for learning invariances. Second, we construct a set of objectives that encourage the model to decompose medical images into a collection of anatomical features in an unsupervised manner. Lastly, we demonstrate, both empirically and theoretically, the efficacy of MONA on three benchmark datasets, achieving a new state of the art under different labeled semi-supervised settings.
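The "tailness" pitfall lends itself to a short illustration. The sketch below (ours, not the MONA implementation) shows class-balanced pixel sampling for a supervised-contrastive objective: capping how many pixel embeddings each class contributes keeps head classes from dominating the loss. The function name, per-class cap, and temperature are illustrative choices.

```python
import torch
import torch.nn.functional as F

def balanced_pixel_contrastive_loss(embeddings, labels, per_class=128, tau=0.1):
    """embeddings: (N, D) pixel features; labels: (N,) integer class ids."""
    feats, labs = [], []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        pick = idx[torch.randperm(idx.numel())[:per_class]]  # cap head classes
        feats.append(embeddings[pick])
        labs.append(labels[pick])
    z = F.normalize(torch.cat(feats), dim=1)
    y = torch.cat(labs)
    self_mask = torch.eye(len(y), dtype=torch.bool)
    sim = (z @ z.t() / tau).masked_fill(self_mask, -1e9)   # exclude self-pairs
    pos = (y[:, None] == y[None, :]).float().masked_fill(self_mask, 0.0)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```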
Citations: 0
Tuning Vision-Language Models With Multiple Prototypes Clustering
Pub Date : 2024-09-13 DOI: 10.1109/TPAMI.2024.3460180
Meng-Hao Guo;Yi Zhang;Tai-Jiang Mu;Sharon X. Huang;Shi-Min Hu
Benefiting from advances in large-scale pre-training, foundation models have demonstrated remarkable capability in fields such as natural language processing and computer vision. However, to achieve expert-level performance in specific applications, such models often need to be fine-tuned with domain-specific knowledge. In this paper, we focus on enabling vision-language models to unleash more potential for visual understanding tasks under few-shot tuning. Specifically, we propose a novel adapter, dubbed lusterAdapter, which is based on a trainable multiple-prototype clustering algorithm, for tuning the CLIP model. It not only alleviates the concern of catastrophic forgetting in foundation models by introducing anchors that inherit common knowledge, but also improves the utilization of the few annotated samples by bringing in clustering and domain priors, thereby improving the performance of few-shot tuning. We have conducted extensive experiments on 11 common classification benchmarks. The results show our method significantly surpasses the original CLIP and achieves state-of-the-art (SOTA) performance under all benchmarks and settings. For example, under the 16-shot setting, our method exhibits a remarkable improvement over the original CLIP by 19.6%, and also surpasses TIP-Adapter and GraphAdapter by 2.7% and 2.2%, respectively, in terms of average accuracy across the 11 benchmarks.
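To make the prototype idea concrete, here is a hedged sketch (not the released lusterAdapter) of a multi-prototype adapter over frozen CLIP features: each class keeps K trainable prototype vectors, initialized for instance from k-means clusters of its few-shot image features, and the adapter logits are blended with the frozen zero-shot logits. The blending weight, the max-over-prototypes readout, and all names are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPrototypeAdapter(nn.Module):
    def __init__(self, class_protos, alpha=0.5, tau=0.07):
        # class_protos: (C, K, D) cluster centers of few-shot image features
        super().__init__()
        self.protos = nn.Parameter(class_protos.clone())
        self.alpha, self.tau = alpha, tau

    def forward(self, img_feat, zeroshot_logits):
        # img_feat: (B, D) frozen, L2-normalized CLIP image features
        p = F.normalize(self.protos, dim=-1)              # (C, K, D)
        sim = torch.einsum('bd,ckd->bck', img_feat, p)    # (B, C, K)
        proto_logits = sim.max(dim=-1).values / self.tau  # nearest prototype per class
        return self.alpha * proto_logits + (1 - self.alpha) * zeroshot_logits
```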
Citations: 0
Low-Dimensional Gradient Helps Out-of-Distribution Detection
Pub Date : 2024-09-12 DOI: 10.1109/TPAMI.2024.3459988
Yingwen Wu;Tao Li;Xinwen Cheng;Jie Yang;Xiaolin Huang
Detecting out-of-distribution (OOD) samples is essential for ensuring the reliability of deep neural networks (DNNs) in real-world scenarios. While previous research has predominantly investigated the disparity between in-distribution (ID) and OOD data through forward information analysis, the discrepancy in parameter gradients during the backward process of DNNs has received insufficient attention. Existing studies on gradient disparities mainly focus on the utilization of gradient norms, neglecting the wealth of information embedded in gradient directions. To bridge this gap, in this paper, we conduct a comprehensive investigation into leveraging the entirety of gradient information for OOD detection. The primary challenge arises from the high dimensionality of gradients due to the large number of network parameters. To solve this problem, we propose performing linear dimension reduction on the gradient using a designated subspace that comprises principal components. This innovative technique enables us to obtain a low-dimensional representation of the gradient with minimal information loss. Subsequently, by integrating the reduced gradient with various existing detection score functions, our approach demonstrates superior performance across a wide range of detection tasks. For instance, on the ImageNet benchmark with ResNet50 model, our method achieves an average reduction of 11.15% in the false positive rate at 95% recall (FPR95) compared to the current state-of-the-art approach.
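The recipe is concrete enough for a minimal sketch under our own assumptions: take the per-sample gradient of the cross-entropy with respect to the last linear layer (which has a cheap closed form), fit a principal subspace on ID gradients, and project test gradients onto it. At test time the predicted class serves as the label surrogate, and the reduced gradient is then fed into an existing detection score function, which we leave abstract here.

```python
import numpy as np

def last_layer_grad(feat, prob, label):
    """Closed-form CE gradient w.r.t. last-layer weights: (prob - y) feat^T."""
    delta = prob.copy()
    delta[label] -= 1.0                   # at test time: label = prob.argmax()
    return np.outer(delta, feat).ravel()  # flatten to one long vector

def fit_subspace(id_grads, k=64):
    """Principal components of ID gradients via SVD; id_grads: (N, P)."""
    g = id_grads - id_grads.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(g, full_matrices=False)
    return vt[:k]                         # (k, P) orthonormal basis

def reduced_gradient(grad, basis):
    """Low-dimensional gradient to plug into a detection score function."""
    return basis @ grad                   # (k,)
```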
Citations: 0
ES-GNN: Generalizing Graph Neural Networks Beyond Homophily With Edge Splitting
Pub Date : 2024-09-12 DOI: 10.1109/TPAMI.2024.3459932
Jingwei Guo;Kaizhu Huang;Rui Zhang;Xinping Yi
While Graph Neural Networks (GNNs) have achieved enormous success in multiple graph analytical tasks, modern variants mostly rely on the strong inductive bias of homophily. However, real-world networks typically exhibit both homophilic and heterophilic linking patterns, wherein adjacent nodes may share dissimilar attributes and distinct labels. Therefore, GNNs that smooth node proximity holistically may aggregate both task-relevant and irrelevant (even harmful) information, limiting their ability to generalize to heterophilic graphs and potentially harming robustness. In this work, we propose a novel Edge Splitting GNN (ES-GNN) framework to adaptively distinguish between graph edges that are either relevant or irrelevant to the learning task. This essentially transforms the original graph dynamically into two subgraphs with the same node set but complementary edge sets. Information propagation on these subgraphs and edge splitting are then conducted alternately, thus disentangling the task-relevant and irrelevant features. Theoretically, we show that our ES-GNN can be regarded as a solution to a disentangled graph denoising problem, which further illustrates our motivations and interprets the improved generalization beyond homophily. Extensive experiments on 11 benchmark and 1 synthetic datasets not only demonstrate the effective performance of ES-GNN but also highlight its robustness to adversarial graphs and mitigation of the over-smoothing problem.
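An illustrative sketch (ours, not the authors' release) of one edge-splitting layer: a scorer assigns each edge a relevance weight in [0, 1], one aggregation pass runs on the relevance-weighted graph and a second on its complement, and only the task-relevant branch would feed the downstream classifier. Degree normalization is omitted for brevity.

```python
import torch
import torch.nn as nn

class EdgeSplitLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())
        self.lin_rel = nn.Linear(dim, dim)   # task-relevant branch
        self.lin_irr = nn.Linear(dim, dim)   # task-irrelevant branch

    def forward(self, x, edge_index):
        src, dst = edge_index                # (2, E) COO edge list
        w = self.scorer(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)

        def aggregate(weight, lin):
            out = torch.zeros_like(x)        # sum weighted neighbor messages
            out.index_add_(0, dst, weight.unsqueeze(-1) * x[src])
            return lin(out)

        # complementary edge weights split the graph into two subgraphs
        return aggregate(w, self.lin_rel), aggregate(1.0 - w, self.lin_irr)
```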
Citations: 0
Weakly-Supervised Depth Estimation and Image Deblurring via Dual-Pixel Sensors
Pub Date : 2024-09-12 DOI: 10.1109/TPAMI.2024.3458974
Liyuan Pan;Richard Hartley;Liu Liu;Zhiwei Xu;Shah Chowdhury;Yan Yang;Hongguang Zhang;Hongdong Li;Miaomiao Liu
Dual-pixel (DP) imaging sensors are increasingly adopted by modern cameras. A DP camera captures a pair of images in a single snapshot by splitting each pixel in half. Several previous studies show how to recover depth information by treating the DP pair as an approximate stereo pair. However, unlike classic stereo disparity, dual-pixel disparity occurs only in image regions with defocus blur. Heavy defocus blur in DP pairs degrades the performance of matching-based depth estimation approaches. Therefore, we treat blur removal and depth estimation as a joint problem. Rather than blindly removing the blur effect, we investigate the formation of the DP pair, which links the blur and depth information. We propose a mathematical DP model that improves depth estimation by exploiting the blur. This exploration motivated our previous work, an end-to-end DDDNet (DP-based Depth and Deblur Network), which jointly estimates depth and restores the image in a supervised fashion. However, collecting ground-truth (GT) depth maps for DP pairs is challenging and limits the depth estimation potential of the DP sensor. Therefore, we propose an extension of the DDDNet, called WDDNet (Weakly-supervised Depth and Deblur Network), which includes an efficient reblur solver that does not require GT depth maps for training. To achieve this, we convert all-in-focus images into supervisory signals for unsupervised depth estimation in our WDDNet. We jointly estimate an all-in-focus image and a disparity map, then use a Reblur and Fstack module to regularize the disparity estimation and image restoration. We conducted extensive experiments on synthetic and real data to demonstrate the competitive performance of our method when compared to state-of-the-art (SOTA) supervised approaches.
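A toy 1D illustration, in our own notation, of why DP disparity appears only where there is defocus: each half of a dual pixel sees the defocus PSF shifted in opposite directions by an amount proportional to the blur size, so in-focus regions (zero blur) show zero left/right disparity. The box-filter PSF and the shift-after-blur approximation are deliberate simplifications of the actual optics.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, shift

def dp_pair_1d(sharp_row, blur_radius):
    """One scanline under a single, constant defocus level (toy model)."""
    blurred = uniform_filter1d(sharp_row.astype(float),
                               size=max(1, 2 * blur_radius + 1))
    left = shift(blurred, +blur_radius / 2.0, order=1)   # half-aperture views
    right = shift(blurred, -blur_radius / 2.0, order=1)
    return left, right  # left/right disparity grows with blur_radius
```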
Citations: 0
Label Deconvolution for Node Representation Learning on Large-Scale Attributed Graphs Against Learning Bias
Pub Date : 2024-09-12 DOI: 10.1109/TPAMI.2024.3459408
Zhihao Shi;Jie Wang;Fanghua Lu;Hanzhu Chen;Defu Lian;Zheng Wang;Jieping Ye;Feng Wu
Node representation learning on attributed graphs, whose nodes are associated with rich attributes (e.g., texts and protein sequences), plays a crucial role in many important downstream tasks. To encode the attributes and graph structures simultaneously, recent studies integrate pre-trained models with graph neural networks (GNNs), where pre-trained models serve as node encoders (NEs) to encode the attributes. As jointly training large NEs and GNNs on large-scale graphs suffers from severe scalability issues, many methods propose to train NEs and GNNs separately. Consequently, they do not take the feature convolutions in GNNs into consideration during the training phase of the NEs, leading to a significant learning bias relative to joint training. To address this challenge, we propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias via a novel and highly scalable approximation to the inverse mapping of GNNs. The inverse mapping leads to an objective function equivalent to that of joint training, while effectively incorporating GNNs into the training phase of the NEs to counter the learning bias. More importantly, we show that LD converges to the optimal objective function values of joint training under mild assumptions. Experiments demonstrate LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
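A schematic sketch, under our own assumptions rather than the paper's exact construction, of what "deconvolved" training targets could look like: instead of fitting the node encoder (NE) directly to the labels Y, fit it to targets that pre-compensate the GNN's smoothing, here modeled as a learnable combination of propagation powers applied to Y.

```python
import torch
import torch.nn as nn

class LabelDeconv(nn.Module):
    """Learnable surrogate for the inverse of GNN feature smoothing."""
    def __init__(self, hops=3):
        super().__init__()
        self.coef = nn.Parameter(torch.ones(hops + 1) / (hops + 1))

    def forward(self, adj_norm, y_soft):
        # adj_norm: (N, N) normalized adjacency; y_soft: (N, C) soft labels
        powers, cur = [y_soft], y_soft
        for _ in range(len(self.coef) - 1):
            cur = adj_norm @ cur                        # one propagation hop
            powers.append(cur)
        w = torch.softmax(self.coef, dim=0)
        return sum(wi * p for wi, p in zip(w, powers))  # deconvolved targets

# The NE is then fit to LabelDeconv(adj, Y) instead of Y, so the separately
# trained encoder "sees" the downstream convolution it will be composed with.
```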
Citations: 0
Learning From Human Attention for Attribute-Assisted Visual Recognition
Pub Date : 2024-09-11 DOI: 10.1109/TPAMI.2024.3458921
Xiao Bai;Pengcheng Zhang;Xiaohan Yu;Jin Zheng;Edwin R. Hancock;Jun Zhou;Lin Gu
With prior knowledge of seen objects, humans have a remarkable ability to recognize novel objects using shared and distinct local attributes. This is significant for the challenging tasks of zero-shot learning (ZSL) and fine-grained visual classification (FGVC), where the discriminative attributes of objects play an important role. Inspired by human visual attention, neural networks have widely exploited the attention mechanism to learn locally discriminative attributes for challenging tasks. Though they have greatly promoted the development of these fields, existing works mainly focus on learning the region embeddings of different attribute features and neglect the importance of discriminative attribute localization. It is also unclear whether the learned attention truly matches real human attention. To tackle this problem, this paper proposes to employ real human gaze data so that visual recognition networks can learn from human attention. Specifically, we design a unified Attribute Attention Network (A²Net) that learns from human attention for both ZSL and FGVC tasks. The overall model consists of an attribute attention branch and a baseline classification network. On top of the image feature maps provided by the baseline classification network, the attribute attention branch employs attribute prototypes to produce attribute attention maps and attribute features. The attribute attention maps are converted to gaze-like attentions to be aligned with real human gaze attention. To guarantee the effectiveness of attribute feature learning, we further align the extracted attribute features with attribute-defined class embeddings. To facilitate learning from human gaze attention for visual recognition problems, we design a bird classification game to collect real human gaze data on the CUB dataset via an eye-tracker device. Experiments on ZSL and FGVC tasks, with and without real human gaze data, validate the benefits and accuracy of our proposed model. This work supports the promising benefits of collecting human gaze datasets and of automatic gaze estimation algorithms that learn from human attention for high-level computer vision tasks.
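A hedged sketch of the gaze-alignment step as we read it: attribute prototypes are correlated with the feature map to form per-attribute attention maps, which are pooled into one gaze-like map and matched to the human fixation heatmap with a KL term. The max-pooling over attributes and all shapes are our assumptions.

```python
import torch
import torch.nn.functional as F

def attribute_attention(feat_map, prototypes):
    """feat_map: (B, D, H, W); prototypes: (A, D) -> attention: (B, A, H, W)."""
    return torch.einsum('bdhw,ad->bahw', feat_map, prototypes)

def gaze_alignment_loss(attn, gaze):
    """attn: (B, A, H, W); gaze: (B, H, W) human fixation density."""
    b = attn.shape[0]
    pooled = attn.max(dim=1).values.view(b, -1)     # gaze-like attention map
    log_p = F.log_softmax(pooled, dim=1)
    q = gaze.view(b, -1)
    q = q / q.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return F.kl_div(log_p, q, reduction='batchmean')
```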
Citations: 0
A Review of Safe Reinforcement Learning: Methods, Theories, and Applications
Pub Date : 2024-09-10 DOI: 10.1109/TPAMI.2024.3457538
Shangding Gu;Long Yang;Yali Du;Guang Chen;Florian Walter;Jun Wang;Alois Knoll
Reinforcement Learning (RL) has achieved tremendous success in many complex decision-making tasks. However, safety concerns arise when deploying RL in real-world applications such as autonomous driving and robotics, leading to a growing demand for safe RL algorithms. While safe control has a long history, the study of safe RL algorithms is still in the early stages. To establish a good foundation for future safe RL research, in this paper we provide a review of safe RL from the perspectives of methods, theories, and applications. First, we review the progress of safe RL from five dimensions and identify five crucial problems for deploying safe RL in real-world applications, coined "2H3W". Second, we analyze the progress of algorithms and theory from the perspective of answering the "2H3W" problems. In particular, the sample complexity of safe RL algorithms is reviewed and discussed, followed by an introduction to the applications and benchmarks of safe RL algorithms. Finally, we open a discussion of the challenging problems in safe RL, hoping to inspire future research on this thread. To advance the study of safe RL algorithms, we release an open-sourced repository containing major safe RL algorithms at the link.
Citations: 0
Refining 3D Human Texture Estimation From a Single Image
Pub Date : 2024-09-10 DOI: 10.1109/TPAMI.2024.3456817
Said Fahri Altindis;Adil Meric;Yusuf Dalva;Uğur Güdükbay;Aysegul Dundar
Estimating 3D human texture from a single image is essential in graphics and vision. It requires learning a mapping function from input images of humans with diverse poses into the parametric (uv) space and reasonably hallucinating invisible parts. To achieve high-quality 3D human texture estimation, we propose a framework that adaptively samples the input via a deformable convolution whose offsets are learned by a deep neural network. Additionally, we describe a novel cycle consistency loss that improves view generalization. We further propose to train our framework with an uncertainty-based pixel-level image reconstruction loss, which enhances color fidelity. We compare our method against the state-of-the-art approaches and show significant qualitative and quantitative improvements.
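A small sketch of an uncertainty-weighted reconstruction loss of the kind the abstract mentions, in its common Laplacian-likelihood form (the paper's exact formulation may differ): the network predicts a per-pixel log-scale alongside the texture, down-weighting the L1 error on pixels it is unsure about, while the log term penalizes blanket uncertainty.

```python
import torch

def uncertainty_l1_loss(pred, target, log_b):
    """pred/target: (B, 3, H, W) images; log_b: (B, 1, H, W) log-uncertainty."""
    return ((pred - target).abs() / log_b.exp() + log_b).mean()
```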
Citations: 0
Deep Single Image Defocus Deblurring via Gaussian Kernel Mixture Learning
Pub Date : 2024-09-10 DOI: 10.1109/TPAMI.2024.3457856
Yuhui Quan;Zicong Wu;Ruotao Xu;Hui Ji
This paper proposes an end-to-end deep learning approach for removing defocus blur from a single defocused image. Defocus blur is a common issue in digital photography that poses a challenge due to its spatially varying and large blurring effect. The proposed approach addresses this challenge by employing a pixel-wise Gaussian kernel mixture (GKM) model to accurately yet compactly parameterize spatially varying defocus point spread functions (PSFs), motivated by the isotropy of defocus PSFs. We further propose a grouped GKM (GGKM) model that decouples the coefficients in GKM, so as to improve the modeling accuracy in an economical manner. A deep neural network called GGKMNet is then developed by unrolling a fixed-point iteration process of GGKM-based image deblurring, which avoids the efficiency issues of existing unrolling DNNs. Using a lightweight scale-recurrent architecture with a coarse-to-fine estimation scheme to predict the coefficients in GGKM, the GGKMNet can efficiently recover an all-in-focus image from a defocused one. These advantages are demonstrated with extensive experiments on five benchmark datasets, where the GGKMNet outperforms existing defocus deblurring methods in restoration quality, while also showing advantages in terms of model complexity and computational efficiency.
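A minimal sketch of the pixel-wise Gaussian kernel mixture forward model that such methods build on: the blurry image is a per-pixel weighted sum of the sharp image convolved with a small bank of isotropic Gaussians. The bank of sigmas, kernel size, and softmax-normalized weights are illustrative choices, not the paper's.

```python
import torch
import torch.nn.functional as F

def gaussian_bank(sigmas, size=15):
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    ks = [torch.exp(-(ax[None]**2 + ax[:, None]**2) / (2 * s * s)) for s in sigmas]
    return torch.stack([k / k.sum() for k in ks]).unsqueeze(1)  # (S, 1, k, k)

def gkm_blur(sharp, weights, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """sharp: (B, 1, H, W); weights: (B, S, H, W), softmaxed over S per pixel."""
    bank = gaussian_bank(sigmas)                                 # (S, 1, k, k)
    stack = F.conv2d(sharp, bank, padding=bank.shape[-1] // 2)   # (B, S, H, W)
    return (stack * weights).sum(dim=1, keepdim=True)            # mixture blur
```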
Citations: 0