
Latest publications: IEEE Transactions on Pattern Analysis and Machine Intelligence

GAN-based Facial Attribute Manipulation
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-10-23 | DOI: 10.48550/arXiv.2210.12683
Yunfan Liu, Qi Li, Qiyao Deng, Zhen Sun, Mingcong Yang
Facial Attribute Manipulation (FAM) aims to aesthetically modify a given face image to render desired attributes, which has received significant attention due to its broad practical applications ranging from digital entertainment to biometric forensics. In the last decade, with the remarkable success of Generative Adversarial Networks (GANs) in synthesizing realistic images, numerous GAN-based models have been proposed to solve FAM with various problem formulation approaches and guiding information representations. This paper presents a comprehensive survey of GAN-based FAM methods with a focus on summarizing their principal motivations and technical details. The main contents of this survey include: (i) an introduction to the research background and basic concepts related to FAM, (ii) a systematic review of GAN-based FAM methods in three main categories, and (iii) an in-depth discussion of important properties of FAM methods, open issues, and future research directions. This survey not only builds a good starting point for researchers new to this field but also serves as a reference for the vision community.
Citations: 2
QDTrack: Quasi-Dense Similarity Learning for Appearance-Only Multiple Object Tracking
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-10-12 | DOI: 10.48550/arXiv.2210.06984
Tobias Fischer, Jiangmiao Pang, Thomas E. Huang, Linlu Qiu, Haofeng Chen, Trevor Darrell, F. Yu
Similarity learning has been recognized as a crucial step for object tracking. However, existing multiple object tracking methods only use sparse ground truth matching as the training objective, while ignoring the majority of the informative regions in images. In this paper, we present Quasi-Dense Similarity Learning, which densely samples hundreds of object regions on a pair of images for contrastive learning. We combine this similarity learning with multiple existing object detectors to build Quasi-Dense Tracking (QDTrack), which does not require displacement regression or motion priors. We find that the resulting distinctive feature space admits a simple nearest neighbor search at inference time for object association. In addition, we show that our similarity learning scheme is not limited to video data, but can learn effective instance similarity even from static input, enabling a competitive tracking performance without training on videos or using tracking supervision. We conduct extensive experiments on a wide variety of popular MOT benchmarks. We find that, despite its simplicity, QDTrack rivals the performance of state-of-the-art tracking methods on all benchmarks and sets a new state-of-the-art on the large-scale BDD100K MOT benchmark, while introducing negligible computational overhead to the detector.
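The appearance-only association step referred to in the abstract can be pictured with a small sketch: detections in the current frame are matched to existing tracks purely by nearest-neighbor search in the learned embedding space, with no motion model. This is a minimal illustration under assumed L2-normalizable embeddings and a hypothetical similarity threshold; the published method's matching rule is more elaborate.

```python
import numpy as np

def associate_by_appearance(track_embs, det_embs, sim_threshold=0.5):
    """Match detections to existing tracks by nearest-neighbor search in
    the learned embedding space (appearance only, no motion priors).

    track_embs: (T, D) array of track embeddings
    det_embs:   (N, D) array of detection embeddings from the current frame
    Returns a list of (det_idx, track_idx or None) pairs.
    """
    # L2-normalize so the dot product equals cosine similarity.
    track_embs = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    det_embs = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)

    sim = det_embs @ track_embs.T            # (N, T) similarity matrix
    matches = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))
        if sim[i, j] >= sim_threshold:
            matches.append((i, j))           # associate detection i with track j
        else:
            matches.append((i, None))        # no confident match: start a new track
    return matches
```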
Citations: 10
Generalized Parametric Contrastive Learning
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-09-26 | DOI: 10.48550/arXiv.2209.12400
Jiequan Cui, Zhisheng Zhong, Zhuotao Tian, Shu Liu, Bei Yu, Jiaya Jia
In this paper, we propose Generalized Parametric Contrastive Learning (GPaCo/PaCo), which works well on both imbalanced and balanced data. Based on theoretical analysis, we observe that the supervised contrastive loss tends to be biased toward high-frequency classes, which increases the difficulty of imbalanced learning. We introduce a set of parametric class-wise learnable centers to rebalance from an optimization perspective. Further, we analyze our GPaCo/PaCo loss under a balanced setting. Our analysis demonstrates that GPaCo/PaCo can adaptively enhance the intensity of pushing samples of the same class close as more samples are pulled together with their corresponding centers, which benefits hard example learning. Experiments on long-tailed benchmarks establish a new state of the art for long-tailed recognition. On full ImageNet, models from CNNs to vision transformers trained with the GPaCo loss show better generalization performance and stronger robustness than MAE models. Moreover, GPaCo can be applied to semantic segmentation, with clear improvements observed on the four most popular benchmarks. Our code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning.
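To illustrate the idea of class-wise learnable centers inside a contrastive objective, the following is a simplified stand-in, not the published GPaCo/PaCo loss: each sample treats same-class batch samples and its own class center as positives, so even a rare class always has at least one positive. The class name, temperature, and uniform weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class CenterContrastiveLoss(torch.nn.Module):
    """Toy supervised contrastive loss with learnable class-wise centers
    (a simplified stand-in for the center-based rebalancing idea, not the
    paper's exact formulation)."""

    def __init__(self, num_classes, dim, temperature=0.07):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(num_classes, dim))
        self.t = temperature

    def forward(self, feats, labels):
        feats = F.normalize(feats, dim=1)            # (B, D) sample embeddings
        centers = F.normalize(self.centers, dim=1)   # (C, D) learnable centers
        keys = torch.cat([feats, centers], dim=0)    # candidates: samples + centers
        key_labels = torch.cat(
            [labels, torch.arange(centers.size(0), device=labels.device)]
        )
        logits = feats @ keys.T / self.t             # (B, B + C)
        B = feats.size(0)
        self_mask = torch.zeros_like(logits, dtype=torch.bool)
        self_mask[torch.arange(B), torch.arange(B)] = True
        logits = logits.masked_fill(self_mask, float("-inf"))   # drop self-pairs
        pos = (labels[:, None] == key_labels[None, :]) & ~self_mask
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        # Average log-likelihood over each sample's positives (batch mates + center).
        pos_log_prob = torch.where(pos, log_prob, torch.zeros_like(log_prob))
        loss = -pos_log_prob.sum(1) / pos.sum(1).clamp(min=1)
        return loss.mean()
```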
Citations: 8
StructNeRF: Neural Radiance Fields for Indoor Scenes with Structural Hints
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-09-12 | DOI: 10.48550/arXiv.2209.05277
Zhengjie Chen, Chen Wang, Yuanchen Guo, Song-Hai Zhang
Neural Radiance Fields (NeRF) achieve photo-realistic view synthesis with densely captured input images. However, the geometry of NeRF is extremely under-constrained given sparse views, resulting in significant degradation of novel view synthesis quality. Inspired by self-supervised depth estimation methods, we propose StructNeRF, a solution to novel view synthesis for indoor scenes with sparse inputs. StructNeRF leverages the structural hints naturally embedded in multi-view inputs to handle the unconstrained geometry issue in NeRF. Specifically, it handles textured and non-textured regions separately: a patch-based multi-view consistent photometric loss constrains the geometry of textured regions, while non-textured regions are explicitly restricted to be 3D-consistent planes. Through the dense self-supervised depth constraints, our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data. Extensive experiments on several real-world datasets demonstrate that StructNeRF shows superior or comparable performance compared to state-of-the-art methods (e.g. NeRF, DSNeRF, RegNeRF, Dense Depth Priors, MonoSDF, etc.) for indoor scenes with sparse inputs, both quantitatively and qualitatively.
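Multi-view photometric consistency of this kind rests on a standard reprojection step: back-project a reference pixel with its predicted depth, transform it by the relative pose, and project it into the source view; the color difference between the two locations is the residual being minimized. Below is a minimal per-pixel sketch under a pinhole camera model; the intrinsics, pose, and depth values are toy assumptions, and the paper applies the idea at the patch level rather than to single pixels.

```python
import numpy as np

def reproject(p_ref, depth, K, R, t):
    """Warp a pixel from the reference view into a source view using the
    predicted depth and the relative pose (R, t). Comparing the image colors
    at p_ref and at the returned location gives the photometric residual."""
    x = np.array([p_ref[0], p_ref[1], 1.0])
    X_ref = depth * (np.linalg.inv(K) @ x)      # back-project to 3D (reference frame)
    X_src = R @ X_ref + t                       # move the point into the source frame
    p = K @ X_src                               # project with the pinhole model
    return p[:2] / p[2]                         # perspective division

# Toy usage: identity relative pose maps the pixel onto itself.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
p_src = reproject((100.0, 80.0), depth=2.0, K=K, R=np.eye(3), t=np.zeros(3))
print(p_src)
```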
Citations: 8
Learning Canonical Embeddings for Unsupervised Shape Correspondence with Locally Linear Transformations
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-09-05 | DOI: 10.48550/arXiv.2209.02152
Pan He, Patrick Emami, Sanjay Ranka, A. Rangarajan
We present a new approach to unsupervised shape correspondence learning between pairs of point clouds. We make the first attempt to adapt the classical locally linear embedding algorithm (LLE), originally designed for nonlinear dimensionality reduction, for shape correspondence. The key idea is to find dense correspondences between shapes by first obtaining high-dimensional neighborhood-preserving embeddings of low-dimensional point clouds and subsequently aligning the source and target embeddings using locally linear transformations. We demonstrate that learning the embedding using a new LLE-inspired point cloud reconstruction objective results in accurate shape correspondences. More specifically, the approach comprises an end-to-end learnable framework of extracting high-dimensional neighborhood-preserving embeddings, estimating locally linear transformations in the embedding space, and reconstructing shapes via divergence measure-based alignment of probability density functions built over reconstructed and target shapes. Our approach enforces embeddings of shapes in correspondence to lie in the same universal/canonical embedding space, which eventually helps regularize the learning process and leads to a simple nearest neighbors approach between shape embeddings for finding reliable correspondences. Comprehensive experiments show that the new method makes noticeable improvements over state-of-the-art approaches on standard shape correspondence benchmark datasets covering both human and nonhuman shapes.
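The classical LLE step this builds on computes, for every point, the weights that best reconstruct it from its nearest neighbors; those neighborhood relations are what the learned embedding is asked to preserve. A minimal numpy sketch of that weight computation, with an illustrative neighborhood size and regularization constant:

```python
import numpy as np

def lle_weights(X, k=6, reg=1e-3):
    """Classical LLE step: for every point, find the weights that best
    reconstruct it from its k nearest neighbors (weights sum to one).
    X: (N, D) point cloud. Returns a dense (N, N) weight matrix."""
    N = X.shape[0]
    W = np.zeros((N, N))
    # Pairwise squared distances for the neighbor search.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    for i in range(N):
        nbrs = np.argsort(d2[i])[1:k + 1]          # skip the point itself
        Z = X[nbrs] - X[i]                         # center neighbors on x_i
        G = Z @ Z.T                                # local Gram matrix (k, k)
        G += reg * np.trace(G) * np.eye(k)         # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(k))         # minimize ||x_i - sum_j w_j x_j||^2
        W[i, nbrs] = w / w.sum()                   # enforce the sum-to-one constraint
    return W
```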
Citations: 0
TokenCut: Segmenting Objects in Images and Videos with Self-supervised Transformer and Normalized Cut
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-09-01 | DOI: 10.48550/arXiv.2209.00383
Yangtao Wang, Xiaoke Shen, Yuan Yuan, Yuming Du, Maomao Li, S. Hu, J. Crowley, D. Vaufreydaz
In this paper, we describe a graph-based algorithm that uses the features obtained by a self-supervised transformer to detect and segment salient objects in images and videos. With this approach, the image patches that compose an image or video are organised into a fully connected graph, in which the edge between each pair of patches is labeled with a similarity score based on the features learned by the transformer. Detection and segmentation of salient objects can then be formulated as a graph-cut problem and solved using the classical Normalized Cut algorithm. Despite the simplicity of this approach, it achieves state-of-the-art results on several common image and video detection and segmentation tasks. For unsupervised object discovery, this approach outperforms the competing approaches by a margin of 6.1%, 5.7%, and 2.6% when tested with the VOC07, VOC12, and COCO20K datasets. For the unsupervised saliency detection task in images, this method improves the score for Intersection over Union (IoU) by 4.4%, 5.6%, and 5.2% when tested with the ECSSD, DUTS, and DUT-OMRON datasets. This method also achieves competitive results for unsupervised video object segmentation tasks with the DAVIS, SegTV2, and FBMS datasets. Our implementation is available at https://www.m-psi.fr/Papers/TokenCut2022/.
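The graph construction and cut described above can be condensed into a few lines: patch features give cosine affinities, and the relaxed Normalized Cut solution is the eigenvector of the normalized graph Laplacian with the second smallest eigenvalue, thresholded into two groups. A minimal sketch, with an illustrative edge threshold and without the paper's seed-selection and mask-refinement steps:

```python
import numpy as np

def token_cut(features, tau=0.2):
    """Bipartition image patches by a spectral relaxation of Normalized Cut.

    features: (N, D) patch features from a self-supervised transformer.
    Edges are cosine similarities, binarized at tau (small epsilon otherwise)
    so the graph stays connected and the eigenproblem is well behaved."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    A = f @ f.T                                   # cosine similarity between patches
    A = np.where(A > tau, 1.0, 1e-5)              # binarized affinities
    d = A.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    fiedler = eigvecs[:, 1]                       # second smallest eigenvector
    # Bipartition of the patches; which side is "object" is decided downstream.
    return fiedler > fiedler.mean()
```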
Citations: 20
PS²F: Polarized Spiral Point Spread Function for Single-Shot 3D Sensing
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-08-29 | DOI: 10.1109/TPAMI.2022.3202511
Bhargav Ghanekar, Vishwanath Saragadam, Dushyant Mehra, Anna-Karin Gustavsson, Aswin C Sankaranarayanan, Ashok Veeraraghavan

We propose a compact snapshot monocular depth estimation technique that relies on an engineered point spread function (PSF). Traditional approaches used in microscopic super-resolution imaging such as the Double-Helix PSF (DHPSF) are ill-suited for scenes that are more complex than a sparse set of point light sources. We show, using the Cramér-Rao lower bound, that separating the two lobes of the DHPSF and thereby capturing two separate images leads to a dramatic increase in depth accuracy. A special property of the phase mask used for generating the DHPSF is that a separation of the phase mask into two halves leads to a spatial separation of the two lobes. We leverage this property to build a compact polarization-based optical setup, where we place two orthogonal linear polarizers on each half of the DHPSF phase mask and then capture the resulting image with a polarization-sensitive camera. Results from simulations and a lab prototype demonstrate that our technique achieves up to 50% lower depth error compared to state-of-the-art designs including the DHPSF and the Tetrapod PSF, with little to no loss in spatial resolution.

Citations: 0
MCTS with Refinement for Proposals Selection Games in Scene Understanding
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-07-07 | DOI: 10.48550/arXiv.2207.03204
Sinisa Stekovic, Mahdi Rad, Alireza Moradi, F. Fraundorfer, V. Lepetit
We propose a novel method, applicable to many scene understanding problems, that adapts the Monte Carlo Tree Search (MCTS) algorithm, originally designed to learn to play games of high state complexity. From a generated pool of proposals, our method jointly selects and optimizes proposals that minimize the objective term. In our first application, floor plan reconstruction from point clouds, our method selects and refines the room proposals, modelled as 2D polygons, by optimizing an objective function that combines the fitness predicted by a deep network with regularizing terms on the room shapes. We also introduce a novel differentiable method for rendering the polygonal shapes of these proposals. Our evaluations on the recent and challenging Structured3D and Floor-SP datasets show significant improvements over the state of the art in both speed and quality of reconstructions, without imposing hard constraints or assumptions on the floor plan configurations. In our second application, we extend our approach to reconstruct general 3D room layouts from a color image and obtain accurate room layouts. We also show that our differentiable renderer can easily be extended to render 3D planar polygons and polygon embeddings. Our method shows high performance on the Matterport3D-Layout dataset, without introducing hard constraints on room layout configurations.
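The MCTS machinery being adapted can be pictured with the standard UCT skeleton: select a child by the UCB1 rule, expand an untried proposal, score the resulting configuration with the objective, and back the score up the tree. The sketch below is the generic algorithm only; the node and score names are placeholders, and the paper's proposal refinement and objective are not modeled.

```python
import math
import random

class Node:
    """Minimal MCTS node for a proposal-selection game (generic UCT, not the
    paper's refinement step). Each child corresponds to adding one proposal."""

    def __init__(self, untried_actions, parent=None):
        self.parent = parent
        self.children = {}                 # action -> Node
        self.untried = list(untried_actions)
        self.visits = 0
        self.value = 0.0                   # cumulative rollout / objective score

    def uct_select(self, c=1.4):
        # UCB1: average value plus an exploration bonus that shrinks with visits.
        # Assumes every child has been visited at least once (fresh children are
        # rolled out immediately after expansion).
        return max(
            self.children.values(),
            key=lambda ch: ch.value / ch.visits
            + c * math.sqrt(math.log(self.visits) / ch.visits),
        )

    def expand(self):
        # Pop a random untried action and attach the corresponding child node.
        action = self.untried.pop(random.randrange(len(self.untried)))
        child = Node(untried_actions=[], parent=self)
        self.children[action] = child
        return child

    def backup(self, score):
        # Propagate the evaluated score back to the root.
        node = self
        while node is not None:
            node.visits += 1
            node.value += score
            node = node.parent
```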
Citations: 3
PS2F: Polarized Spiral Point Spread Function for Single-Shot 3D Sensing
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-07-03 | DOI: 10.48550/arXiv.2207.00945
B. Ghanekar, Vishwanath Saragadam, Dushyant Mehra, A. Gustavsson, Aswin C. Sankaranarayanan, A. Veeraraghavan
We propose a compact snapshot monocular depth estimation technique that relies on an engineered point spread function (PSF). Traditional approaches used in microscopic super-resolution imaging such as the Double-Helix PSF (DHPSF) are ill-suited for scenes that are more complex than a sparse set of point light sources. We show, using the Cramér-Rao lower bound, that separating the two lobes of the DHPSF and thereby capturing two separate images leads to a dramatic increase in depth accuracy. A special property of the phase mask used for generating the DHPSF is that a separation of the phase mask into two halves leads to a spatial separation of the two lobes. We leverage this property to build a compact polarization-based optical setup, where we place two orthogonal linear polarizers on each half of the DHPSF phase mask and then capture the resulting image with a polarization-sensitive camera. Results from simulations and a lab prototype demonstrate that our technique achieves up to 50% lower depth error compared to state-of-the-art designs including the DHPSF and the Tetrapod PSF, with little to no loss in spatial resolution.
Citations: 2
Graph-Time Convolutional Neural Networks: Architecture and Theoretical Analysis
IF 23.6 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-06-30 | DOI: 10.48550/arXiv.2206.15174
Mohammad Sabbaqi, E. Isufi
Devising and analysing learning models for spatiotemporal network data is important for tasks including forecasting, anomaly detection, and multi-agent coordination, among others. Graph Convolutional Neural Networks (GCNNs) are an established approach for learning from time-invariant network data. The graph convolution operation offers a principled way to aggregate multi-resolution information in each layer and lends itself to some degree of mathematical analysis through tools from graph signal processing. This analysis provides insights into the equivariance properties of GCNNs, the spectral behaviour of the learned filters, and the stability to perturbations in the graph topology, which arise because of support perturbations or uncertainties. However, extending the convolution-principled learning and the respective analysis to the spatiotemporal domain is challenging because spatiotemporal data have more intrinsic dependencies. Hence, greater flexibility to jointly capture spatial and temporal dependencies is required to learn meaningful higher-order representations. Here, we leverage product graphs to represent the spatiotemporal dependencies in the data and introduce Graph-Time Convolutional Neural Networks (GTCNNs) as a principled architecture to aid learning. The proposed approach can work with any type of product graph, and we further introduce a parametric product graph that learns the spatiotemporal coupling as well. The convolution principle further allows similar mathematical tractability to that of GCNNs. In particular, the stability result shows that GTCNNs are stable to spatial perturbations, but there is an implicit trade-off between discriminability and robustness; i.e., the more complex the model, the less stable it is. Extensive numerical results on benchmark datasets corroborate our findings and show that the GTCNN compares favourably with state-of-the-art solutions. We anticipate that the GTCNN will be a starting point for more sophisticated models that achieve good performance but are also fundamentally grounded.
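The product-graph idea admits a compact sketch: take Kronecker products of powers of the temporal and spatial shift operators, weight them with coupling coefficients (learnable in the parametric case), and run an ordinary polynomial graph filter on the resulting operator. This is one common parametric form written under assumed toy sizes; the coefficients, graphs, and filter taps below are illustrative, not values from the paper.

```python
import numpy as np

def product_graph_shift(S_time, S_graph, s):
    """Parametric product graph: a weighted sum of Kronecker products of powers
    (0 or 1) of the temporal and spatial shift operators. s is a 2x2 array of
    coupling coefficients, learned in practice and set arbitrarily here."""
    terms = [
        s[i, j] * np.kron(np.linalg.matrix_power(S_time, i),
                          np.linalg.matrix_power(S_graph, j))
        for i in (0, 1) for j in (0, 1)
    ]
    return sum(terms)

def graph_time_conv(x, S, h):
    """Polynomial convolution on the product graph: y = sum_k h[k] * S^k x,
    where x stacks the graph signal over all time steps."""
    y = np.zeros_like(x, dtype=float)
    Sk_x = x.astype(float)
    for hk in h:
        y += hk * Sk_x
        Sk_x = S @ Sk_x          # shift once more over the product graph
    return y

# Toy example: a 3-node directed cycle observed over 4 time steps.
S_graph = np.roll(np.eye(3), 1, axis=1)     # spatial shift (cycle adjacency)
S_time = np.eye(4, k=-1)                    # temporal shift (one-step delay)
S = product_graph_shift(S_time, S_graph, s=np.ones((2, 2)))
x = np.random.randn(12)                     # 4 time steps x 3 nodes, stacked
y = graph_time_conv(x, S, h=[1.0, 0.5, 0.25])
```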
Citations: 2