
Latest publications in IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society

OTAMatch: Optimal Transport Assignment With PseudoNCE for Semi-Supervised Learning
Jinjin Zhang;Junjie Liu;Debang Li;Qiuyu Huang;Jiaxin Chen;Di Huang
In semi-supervised learning (SSL), many approaches follow the effective self-training paradigm with consistency regularization, utilizing threshold heuristics to alleviate label noise. However, such threshold heuristics lead to the underutilization of crucial discriminative information from the excluded data. In this paper, we present OTAMatch, a novel SSL framework that reformulates pseudo-labeling as an optimal transport (OT) assignment problem and simultaneously exploits data with high confidence to mitigate the confirmation bias. Firstly, OTAMatch models the pseudo-label allocation task as a convex minimization problem, facilitating end-to-end optimization with all pseudo-labels and employing the Sinkhorn-Knopp algorithm for efficient approximation. Meanwhile, we incorporate epsilon-greedy posterior regularization and curriculum bias correction strategies to constrain the distribution of OT assignments, improving robustness to noisy pseudo-labels. Secondly, we propose PseudoNCE, which explicitly exploits pseudo-label consistency with threshold heuristics to maximize mutual information within self-training, significantly improving the balance between convergence speed and performance. Consequently, our proposed approach achieves competitive performance on various SSL benchmarks. Specifically, OTAMatch substantially outperforms previous state-of-the-art SSL algorithms in realistic and challenging scenarios, exemplified by a notable 9.45% error rate reduction over SoftMatch on ImageNet with a 100K-label split, underlining its robustness and effectiveness.
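For readers unfamiliar with the Sinkhorn-Knopp step named in the abstract, the following is a minimal, illustrative sketch of how an entropic OT assignment of pseudo-labels can be approximated. It is not the authors' OTAMatch code; the function name, uniform marginals, and hyperparameters are assumptions.

```python
# Minimal Sinkhorn-Knopp sketch for soft pseudo-label assignment (illustrative only,
# not the authors' implementation). `probs` holds model predictions for a batch of
# unlabeled samples; `marginal` is an assumed class prior used as the column constraint.
import numpy as np

def sinkhorn_assignment(probs, marginal=None, epsilon=0.05, n_iters=50):
    """Approximate the entropic OT plan between samples (rows) and classes (columns)."""
    B, C = probs.shape
    K = np.power(probs + 1e-8, 1.0 / epsilon)   # Gibbs kernel of the -log(prob) cost
    r = np.full(B, 1.0 / B)                     # uniform marginal over samples
    c = np.full(C, 1.0 / C) if marginal is None else marginal
    u = np.ones(B)
    for _ in range(n_iters):                    # alternate row/column scaling
        v = c / (K.T @ u + 1e-8)
        u = r / (K @ v + 1e-8)
    Q = (u[:, None] * K) * v[None, :]           # transport plan obeying both marginals
    return Q / Q.sum(axis=1, keepdims=True)     # rows = per-sample soft pseudo-labels
```

The row-normalized plan can then serve as the soft target for the unlabeled-data loss in a self-training loop.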
{"title":"OTAMatch: Optimal Transport Assignment With PseudoNCE for Semi-Supervised Learning","authors":"Jinjin Zhang;Junjie Liu;Debang Li;Qiuyu Huang;Jiaxin Chen;Di Huang","doi":"10.1109/TIP.2024.3425174","DOIUrl":"10.1109/TIP.2024.3425174","url":null,"abstract":"In semi-supervised learning (SSL), many approaches follow the effective self-training paradigm with consistency regularization, utilizing threshold heuristics to alleviate label noise. However, such threshold heuristics lead to the underutilization of crucial discriminative information from the excluded data. In this paper, we present OTAMatch, a novel SSL framework that reformulates pseudo-labeling as an optimal transport (OT) assignment problem and simultaneously exploits data with high confidence to mitigate the confirmation bias. Firstly, OTAMatch models the pseudo-label allocation task as a convex minimization problem, facilitating end-to-end optimization with all pseudo-labels and employing the Sinkhorn-Knopp algorithm for efficient approximation. Meanwhile, we incorporate epsilon-greedy posterior regularization and curriculum bias correction strategies to constrain the distribution of OT assignments, improving the robustness with noisy pseudo-labels. Secondly, we propose PseudoNCE, which explicitly exploits pseudo-label consistency with threshold heuristics to maximize mutual information within self-training, significantly boosting the balance of convergence speed and performance. Consequently, our proposed approach achieves competitive performance on various SSL benchmarks. Specifically, OTAMatch substantially outperforms the previous state-of-the-art SSL algorithms in realistic and challenging scenarios, exemplified by a no\u0000<xref>table 9</xref>\u0000.45% error rate reduction over SoftMatch on ImageNet with 100K-label split, underlining its robustness and effectiveness.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HAFormer: Unleashing the Power of Hierarchy-Aware Features for Lightweight Semantic Segmentation
Guoan Xu;Wenjing Jia;Tao Wu;Ligeng Chen;Guangwei Gao
Both Convolutional Neural Networks (CNNs) and Transformers have shown great success in semantic segmentation tasks. Efforts have been made to integrate CNNs with Transformer models to capture both local and global context interactions. However, there is still room for improvement, particularly under constrained computational resources. In this paper, we introduce HAFormer, a model that combines the hierarchical feature extraction ability of CNNs with the global dependency modeling capability of Transformers to tackle lightweight semantic segmentation challenges. Specifically, we design a Hierarchy-Aware Pixel-Excitation (HAPE) module for adaptive multi-scale local feature extraction. For global perception modeling, we devise an Efficient Transformer (ET) module that streamlines the quadratic computations associated with traditional Transformers. Moreover, a correlation-weighted Fusion (cwF) module selectively merges diverse feature representations, significantly enhancing predictive accuracy. HAFormer delivers high performance with minimal computational overhead and a compact model size, achieving 74.2% mIoU on the Cityscapes and 71.1% mIoU on the CamVid test datasets, with frame rates of 105 FPS and 118 FPS on a single 2080Ti GPU. The source code is available at https://github.com/XU-GITHUB-curry/HAFormer.
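As a rough illustration of how the quadratic attention cost can be reduced (the abstract does not detail the ET module), the sketch below downsamples keys and values before attention. The class name, reduction ratio, and layout are assumptions and do not reproduce the paper's design.

```python
# Illustrative PyTorch sketch: spatial-reduction attention as one common way to cut
# the quadratic attention cost. This is NOT the paper's Efficient Transformer module.
import torch
import torch.nn as nn

class ReducedSelfAttention(nn.Module):
    def __init__(self, dim, num_heads=4, sr_ratio=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Strided conv shrinks the key/value token grid, cutting the cost from
        # O(N^2) to roughly O(N^2 / sr_ratio^2).
        self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, H, W):
        # x: (B, N, C) tokens flattened from an H x W feature map (N = H * W).
        B, N, C = x.shape
        feat = x.transpose(1, 2).reshape(B, C, H, W)
        kv = self.norm(self.sr(feat).flatten(2).transpose(1, 2))  # fewer k/v tokens
        out, _ = self.attn(x, kv, kv)
        return out
```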
{"title":"HAFormer: Unleashing the Power of Hierarchy-Aware Features for Lightweight Semantic Segmentation","authors":"Guoan Xu;Wenjing Jia;Tao Wu;Ligeng Chen;Guangwei Gao","doi":"10.1109/TIP.2024.3425048","DOIUrl":"10.1109/TIP.2024.3425048","url":null,"abstract":"Both Convolutional Neural Networks (CNNs) and Transformers have shown great success in semantic segmentation tasks. Efforts have been made to integrate CNNs with Transformer models to capture both local and global context interactions. However, there is still room for enhancement, particularly when considering constraints on computational resources. In this paper, we introduce HAFormer, a model that combines the hierarchical features extraction ability of CNNs with the global dependency modeling capability of Transformers to tackle lightweight semantic segmentation challenges. Specifically, we design a Hierarchy-Aware Pixel-Excitation (HAPE) module for adaptive multi-scale local feature extraction. During the global perception modeling, we devise an Efficient Transformer (ET) module streamlining the quadratic calculations associated with traditional Transformers. Moreover, a correlation-weighted Fusion (cwF) module selectively merges diverse feature representations, significantly enhancing predictive accuracy. HAFormer achieves high performance with minimal computational overhead and compact model size, achieving 74.2% mIoU on Cityscapes and 71.1% mIoU on CamVid test datasets, with frame rates of 105FPS and 118FPS on a single 2080Ti GPU. The source codes are available at \u0000<uri>https://github.com/XU-GITHUB-curry/HAFormer</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unsupervised Domain Adaptation via Domain-Adaptive Diffusion
Duo Peng;Qiuhong Ke;ArulMurugan Ambikapathi;Yasin Yazici;Yinjie Lei;Jun Liu
Unsupervised Domain Adaptation (UDA) is quite challenging due to the large distribution discrepancy between the source domain and the target domain. Inspired by diffusion models, which have a strong capability to gradually convert data distributions across a large gap, we explore the diffusion technique to handle this challenging UDA task. However, using diffusion models to convert the data distribution across different domains is non-trivial, as standard diffusion models generally convert from the Gaussian distribution rather than from a specific domain distribution. Besides, during the conversion, the semantics of the source-domain data need to be preserved so that samples can still be classified correctly in the target domain. To tackle these problems, we propose a novel Domain-Adaptive Diffusion (DAD) module accompanied by a Mutual Learning Strategy (MLS), which gradually converts the data distribution from the source domain to the target domain while enabling the classification model to learn along the domain transition process. Consequently, our method eases the challenge of UDA by decomposing the large domain gap into small ones and gradually enhancing the capacity of the classification model to finally adapt to the target domain. Our method outperforms the current state of the art by a large margin on three widely used UDA datasets.
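To make the contrast with standard diffusion concrete, the snippet below shows the usual DDPM-style forward process, which drives any input distribution toward a Gaussian as the timestep grows; the noise schedule is an illustrative assumption and none of this is the DAD module itself.

```python
# Standard DDPM-style forward process: vanilla diffusion maps any data distribution
# toward a Gaussian, rather than toward another domain's distribution (the limitation
# the abstract points out). Schedule values here are illustrative assumptions.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)    # cumulative product \bar{alpha}_t

def forward_diffuse(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    noise = torch.randn_like(x0)
    a = alpha_bar[t].sqrt().view(-1, *([1] * (x0.dim() - 1)))
    s = (1.0 - alpha_bar[t]).sqrt().view(-1, *([1] * (x0.dim() - 1)))
    return a * x0 + s * noise

# As t -> T, abar_t -> 0 and x_t approaches pure Gaussian noise, losing the
# source-domain structure that DAD instead aims to steer toward the target domain.
```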
{"title":"Unsupervised Domain Adaptation via Domain-Adaptive Diffusion","authors":"Duo Peng;Qiuhong Ke;ArulMurugan Ambikapathi;Yasin Yazici;Yinjie Lei;Jun Liu","doi":"10.1109/TIP.2024.3424985","DOIUrl":"10.1109/TIP.2024.3424985","url":null,"abstract":"Unsupervised Domain Adaptation (UDA) is quite challenging due to the large distribution discrepancy between the source domain and the target domain. Inspired by diffusion models which have strong capability to gradually convert data distributions across a large gap, we consider to explore the diffusion technique to handle the challenging UDA task. However, using diffusion models to convert data distribution across different domains is a non-trivial problem as the standard diffusion models generally perform conversion from the Gaussian distribution instead of from a specific domain distribution. Besides, during the conversion, the semantics of the source-domain data needs to be preserved to classify correctly in the target domain. To tackle these problems, we propose a novel Domain-Adaptive Diffusion (DAD) module accompanied by a Mutual Learning Strategy (MLS), which can gradually convert data distribution from the source domain to the target domain while enabling the classification model to learn along the domain transition process. Consequently, our method successfully eases the challenge of UDA by decomposing the large domain gap into small ones and gradually enhancing the capacity of classification model to finally adapt to the target domain. Our method outperforms the current state-of-the-arts by a large margin on three widely used UDA datasets.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhanced Long-Tailed Recognition With Contrastive CutMix Augmentation
Haolin Pan;Yong Guo;Mianjie Yu;Jian Chen
Real-world data often follows a long-tailed distribution, where a few head classes occupy most of the data and a large number of tail classes contain only very limited samples. In practice, deep models often show poor generalization performance on tail classes due to the imbalanced distribution. To tackle this, data augmentation has become an effective way to synthesize new samples for tail classes. One popular approach is CutMix, which explicitly mixes the images of tail classes with others while constructing the labels according to the ratio of the areas cropped from the two images. However, the area-based labels entirely ignore the inherent semantic information of the augmented samples, often leading to misleading training signals. To address this issue, we propose Contrastive CutMix (ConCutMix), which constructs augmented samples with semantically consistent labels to boost the performance of long-tailed recognition. Specifically, we compute the similarities between samples in the semantic space learned by contrastive learning and use them to rectify the area-based labels. Experiments show that our ConCutMix significantly improves the accuracy on tail classes as well as the overall performance. For example, based on ResNeXt-50, we improve the overall accuracy on ImageNet-LT by 3.0%, thanks to a significant improvement of 3.3% on tail classes. We highlight that the improvement also generalizes well to other benchmarks and models. Our code and pretrained models are available at https://github.com/PanHaulin/ConCutMix.
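A minimal sketch of the two label types discussed above: the area-ratio label produced by CutMix, and a similarity-based rectification in the spirit of ConCutMix. The rectification formula, temperature, and mixing weight are simplified assumptions rather than the authors' exact method.

```python
# Illustrative CutMix with area-based labels, plus a hedged semantic rectification
# (a stand-in for ConCutMix's label correction, not the authors' formulation).
import torch
import torch.nn.functional as F

def cutmix(x, y_onehot, lam):
    """Paste a random crop from a shuffled batch into x; label weight = area ratio."""
    B, _, H, W = x.shape
    perm = torch.randperm(B)
    cut_h, cut_w = int(H * (1 - lam) ** 0.5), int(W * (1 - lam) ** 0.5)
    cy, cx = torch.randint(H, (1,)).item(), torch.randint(W, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)
    x_aug = x.clone()
    x_aug[:, :, y1:y2, x1:x2] = x[perm, :, y1:y2, x1:x2]
    area = (y2 - y1) * (x2 - x1) / (H * W)                 # actual pasted area ratio
    y_area = (1 - area) * y_onehot + area * y_onehot[perm]
    return x_aug, y_area

def rectify_label(y_area, feats, prototypes, alpha=0.5, tau=0.1):
    """Blend the area-based label with a semantic label from feature-prototype similarity."""
    sim = F.normalize(feats, dim=1) @ F.normalize(prototypes, dim=1).T
    y_sem = F.softmax(sim / tau, dim=1)
    return alpha * y_area + (1 - alpha) * y_sem
```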
{"title":"Enhanced Long-Tailed Recognition With Contrastive CutMix Augmentation","authors":"Haolin Pan;Yong Guo;Mianjie Yu;Jian Chen","doi":"10.1109/TIP.2024.3425148","DOIUrl":"10.1109/TIP.2024.3425148","url":null,"abstract":"Real-world data often follows a long-tailed distribution, where a few head classes occupy most of the data and a large number of tail classes only contain very limited samples. In practice, deep models often show poor generalization performance on tail classes due to the imbalanced distribution. To tackle this, data augmentation has become an effective way by synthesizing new samples for tail classes. Among them, one popular way is to use CutMix that explicitly mixups the images of tail classes and the others, while constructing the labels according to the ratio of areas cropped from two images. However, the area-based labels entirely ignore the inherent semantic information of the augmented samples, often leading to misleading training signals. To address this issue, we propose a Contrastive CutMix (ConCutMix) that constructs augmented samples with semantically consistent labels to boost the performance of long-tailed recognition. Specifically, we compute the similarities between samples in the semantic space learned by contrastive learning, and use them to rectify the area-based labels. Experiments show that our ConCutMix significantly improves the accuracy on tail classes as well as the overall performance. For example, based on ResNeXt-50, we improve the overall accuracy on ImageNet-LT by 3.0% thanks to the significant improvement of 3.3% on tail classes. We highlight that the improvement also generalizes well to other benchmarks and models. Our code and pretrained models are available at \u0000<uri>https://github.com/PanHaulin/ConCutMix</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Scalable Deep Color Quantization: A Cluster Imitation Approach
Yunzhong Hou, Stephen Gould, Liang Zheng

Color quantization reduces the number of colors used in an image while preserving its content, which is essential in pixel art and knitting art creation. Traditional methods primarily focus on visual fidelity and treat the task as a clustering problem in the RGB space. While effective in large (5-6 bit) color spaces, these approaches cannot guarantee semantics in small (1-2 bit) color spaces. On the other hand, deep color quantization methods use network viewers such as AlexNet and ResNet for supervision, effectively preserving semantics in small color spaces. However, in large color spaces, they lag behind traditional methods in terms of visual fidelity. In this work, we propose ColorCNN+, a novel approach that combines the strengths of both. It uses network viewer signals for supervision in small color spaces and learns to cluster the colors in large color spaces. Notably, clustering is non-trivial for neural networks, and existing deep clustering methods often need K-means to cluster the features. In this work, through a newly introduced cluster imitation loss, ColorCNN+ learns to directly output the cluster assignment without any additional steps. Furthermore, ColorCNN+ supports multiple color space sizes and network viewers, offering scalability and easy deployment. Experimental results demonstrate the competitive performance of ColorCNN+ across various settings. Code is available at link.
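For reference, the traditional clustering formulation mentioned above can be sketched as plain K-means in RGB space; this is the baseline being improved upon, not ColorCNN+ itself, and the bit-depth and seed parameters are illustrative.

```python
# Traditional color quantization baseline: K-means clustering in RGB space,
# repainting an image with 2**bits colors (not the ColorCNN+ network).
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(img, bits=2, seed=0):
    """img: (H, W, 3) uint8 array. Returns the image repainted with 2**bits colors."""
    H, W, _ = img.shape
    pixels = img.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=2 ** bits, n_init=4, random_state=seed).fit(pixels)
    palette = km.cluster_centers_.astype(np.uint8)   # learned color palette
    return palette[km.labels_].reshape(H, W, 3)
```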

{"title":"Scalable Deep Color Quantization: a Cluster Imitation Approach.","authors":"Yunzhong Hou, Stephen Gould, Liang Zheng","doi":"10.1109/TIP.2024.3414132","DOIUrl":"https://doi.org/10.1109/TIP.2024.3414132","url":null,"abstract":"<p><p>Color quantization reduces the number of colors used in an image while preserving its content, which is essential in pixel art and knitting art creation. Traditional methods primarily focus on visual fidelity and treat it as a clustering problem in the RGB space. While effective in large (5-6 bits) color spaces, these approaches cannot guarantee semantics in small (1-2 bits) color spaces. On the other hand, deep color quantization methods use network viewers such as AlexNet and ResNet for supervision, effectively preserving semantics in small color spaces. However, in large color spaces, they lag behind traditional methods in terms of visual fidelity. In this work, we propose ColorCNN+, a novel approach that combines the strengths of both. It uses network viewer signals for supervision in small color spaces and learns to cluster the colors in large color spaces. Noteworthily, it is non-trivial for neural networks to do clustering, where existing deep clustering methods often need K-means to cluster the features. In this work, through a newly introduced cluster imitation loss, ColorCNN+ learns to directly output the cluster assignment without any additional steps. Furthermore, ColorCNN+ supports multiple color space sizes and network viewers, offering scalability and easy deployment. Experimental results demonstrate competitive performance of ColorCNN+ across various settings. Code is available at link.</p>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141592374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cayley Rotation Averaging: Multiple Camera Averaging Under the Cayley Framework
Qiulei Dong;Shuang Deng;Yuzhen Liu
Rotation averaging, which aims to calculate the absolute rotations of a set of cameras from a redundant set of their relative rotations, is an important and challenging topic arising in the study of structure from motion. A central problem in rotation averaging is how to alleviate the influence of noise and outliers. To address this problem, we investigate rotation averaging under the Cayley framework in this paper, inspired by the extra-constraint-free nature of the Cayley rotation representation. Firstly, for the relative rotation of an arbitrary pair of cameras, regardless of whether it is corrupted by noise/outliers or not, a general Cayley rotation constraint equation is derived to reflect the relationship between this relative rotation and the absolute rotations of the two cameras, according to the Cayley rotation representation. Then, based on such a set of Cayley rotation constraint equations, a Cayley-based Rotation Averaging approach, called CRA, is proposed, in which an adaptive regularizer is designed to further alleviate the influence of outliers. Finally, a unified iterative algorithm for minimizing several commonly used loss functions is proposed under this approach. Experimental results on 16 real-world datasets and multiple synthetic datasets demonstrate that the proposed CRA approach achieves better accuracy than several typical rotation averaging approaches in most cases.
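The Cayley representation the abstract relies on maps an unconstrained 3-vector to a rotation, which is why no orthogonality or unit-norm constraints need to be enforced. Below is a small utility sketch of that standard mapping (not the CRA solver itself); the relative-rotation convention in the closing comment is one common choice.

```python
# Cayley (Gibbs vector) rotation representation: any s in R^3 maps to a rotation
# (excluding rotations by exactly pi). Illustrative utilities, not the CRA algorithm.
import numpy as np

def skew(s):
    """Cross-product matrix [s]_x of a 3-vector s."""
    return np.array([[0.0, -s[2], s[1]],
                     [s[2], 0.0, -s[0]],
                     [-s[1], s[0], 0.0]])

def cayley_to_rotation(s):
    """R = (I + [s]_x)(I - [s]_x)^{-1}."""
    A = skew(np.asarray(s, dtype=float))
    I = np.eye(3)
    return (I + A) @ np.linalg.inv(I - A)

def rotation_to_cayley(R):
    """Inverse map: [s]_x = (R - I)(R + I)^{-1}."""
    I = np.eye(3)
    A = (R - I) @ np.linalg.inv(R + I)
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

# A relative-rotation constraint such as R_ij ~ R_j @ R_i.T can then be written
# directly in the unconstrained parameters s_i, s_j and optimized without projection.
```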
{"title":"Cayley Rotation Averaging: Multiple Camera Averaging Under the Cayley Framework","authors":"Qiulei Dong;Shuang Deng;Yuzhen Liu","doi":"10.1109/TIP.2024.3416057","DOIUrl":"10.1109/TIP.2024.3416057","url":null,"abstract":"Rotation averaging, which aims to calculate the absolute rotations of a set of cameras from a redundant set of their relative rotations, is an important and challenging topic arising in the study of structure from motion. A central problem in rotation averaging is how to alleviate the influence of noise and outliers. Addressing this problem, we investigate rotation averaging under the Cayley framework in this paper, inspired by the extra-constraint-free nature of the Cayley rotation representation. Firstly, for the relative rotation of an arbitrary pair of cameras regardless of whether it is corrupted by noise/outliers or not, a general Cayley rotation constraint equation is derived for reflecting the relationship between this relative rotation and the absolute rotations of the two cameras, according to the Cayley rotation representation. Then based on such a set of Cayley rotation constraint equations, a Cayley-based approach for Rotation Averaging is proposed, called CRA, where an adaptive regularizer is designed for further alleviating the influence of outliers. Finally, a unified iterative algorithm for minimizing some commonly-used loss functions is proposed under this approach. Experimental results on 16 real-world datasets and multiple synthetic datasets demonstrate that the proposed CRA approach achieves a better accuracy in comparison to several typical rotation averaging approaches in most cases.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141592373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Virtual View Selection for 3D Scene Semantic Segmentation
Tai-Jiang Mu;Ming-Yuan Shen;Yu-Kun Lai;Shi-Min Hu
2D-3D joint learning is essential and effective for fundamental 3D vision tasks, such as 3D semantic segmentation, due to the complementary information these two visual modalities contain. Most current 3D scene semantic segmentation methods process 2D images “as they are”, i.e., they use only real captured 2D images. However, such captured 2D images may be redundant, with abundant occlusion and/or limited field of view (FoV), leading to poor performance for current methods involving 2D inputs. In this paper, we propose a general learning framework for joint 2D-3D scene understanding by selecting informative virtual 2D views of the underlying 3D scene. We then feed both the 3D geometry and the generated virtual 2D views into any joint 2D-3D-input or pure 3D-input based deep neural models for improving 3D scene understanding. Specifically, we generate virtual 2D views based on an information score map learned from the current 3D scene semantic segmentation results. To achieve this, we formalize the learning of the information score map as a deep reinforcement learning process, which rewards good predictions using a deep neural network. To obtain a compact set of virtual 2D views that jointly cover informative surfaces of the 3D scene as much as possible, we further propose an efficient greedy virtual view coverage strategy in the normal-sensitive 6D space, including 3-dimensional point coordinates and 3-dimensional normals. We have validated our proposed framework with various joint 2D-3D-input or pure 3D-input based deep neural models on two real-world 3D scene datasets, i.e., ScanNet v2 and S3DIS, and the results demonstrate that our method obtains a consistent gain over baseline models and achieves new top accuracy for joint 2D and 3D scene semantic segmentation. Code is available at https://github.com/smy-THU/VirtualViewSelection.
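The greedy coverage idea mentioned in the abstract can be sketched as weighted maximum coverage over candidate views. The visibility and score inputs, and the plain 3D treatment here, are simplifying assumptions (the paper works in a normal-sensitive 6D space).

```python
# Generic greedy max-coverage sketch for picking a compact set of virtual views:
# at each step, take the candidate view that adds the most information-weighted,
# not-yet-covered points. Illustrative only; not the paper's exact strategy.
import numpy as np

def greedy_view_selection(visibility, info_score, budget):
    """visibility: (V, P) bool, view v sees point p; info_score: (P,) float; returns view ids."""
    V, P = visibility.shape
    covered = np.zeros(P, dtype=bool)
    chosen = []
    for _ in range(budget):
        gains = (visibility & ~covered) @ info_score   # marginal gain of each view
        best = int(np.argmax(gains))
        if gains[best] <= 0:                           # nothing informative left to cover
            break
        chosen.append(best)
        covered |= visibility[best]
    return chosen
```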
{"title":"Learning Virtual View Selection for 3D Scene Semantic Segmentation","authors":"Tai-Jiang Mu;Ming-Yuan Shen;Yu-Kun Lai;Shi-Min Hu","doi":"10.1109/TIP.2024.3421952","DOIUrl":"10.1109/TIP.2024.3421952","url":null,"abstract":"2D-3D joint learning is essential and effective for fundamental 3D vision tasks, such as 3D semantic segmentation, due to the complementary information these two visual modalities contain. Most current 3D scene semantic segmentation methods process 2D images “as they are”, i.e., only real captured 2D images are used. However, such captured 2D images may be redundant, with abundant occlusion and/or limited field of view (FoV), leading to poor performance for the current methods involving 2D inputs. In this paper, we propose a general learning framework for joint 2D-3D scene understanding by selecting informative virtual 2D views of the underlying 3D scene. We then feed both the 3D geometry and the generated virtual 2D views into any joint 2D-3D-input or pure 3D-input based deep neural models for improving 3D scene understanding. Specifically, we generate virtual 2D views based on an information score map learned from the current 3D scene semantic segmentation results. To achieve this, we formalize the learning of the information score map as a deep reinforcement learning process, which rewards good predictions using a deep neural network. To obtain a compact set of virtual 2D views that jointly cover informative surfaces of the 3D scene as much as possible, we further propose an efficient greedy virtual view coverage strategy in the normal-sensitive 6D space, including 3-dimensional point coordinates and 3-dimensional normal. We have validated our proposed framework for various joint 2D-3D-input or pure 3D-input based deep neural models on two real-world 3D scene datasets, i.e., ScanNet v2 and S3DIS, and the results demonstrate that our method obtains a consistent gain over baseline models and achieves new top accuracy for joint 2D and 3D scene semantic segmentation. Code is available at \u0000<uri>https://github.com/smy-THU/VirtualViewSelection</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141581863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Low-Light Light Field Images With a Deep Compensation Unfolding Network
Xianqiang Lyu;Junhui Hou
This paper presents a novel and interpretable end-to-end learning framework, called the deep compensation unfolding network (DCUNet), for restoring light field (LF) images captured under low-light conditions. DCUNet is designed with a multi-stage architecture that mimics the optimization process of solving an inverse imaging problem in a data-driven fashion. The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result. Additionally, DCUNet includes a content-associated deep compensation module at each optimization stage to suppress noise and illumination map estimation errors. To properly mine and leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module that comprehensively exploits redundant information in LF images. The experimental results on both simulated and real datasets demonstrate the superiority of our DCUNet over state-of-the-art methods, both qualitatively and quantitatively. Moreover, DCUNet preserves the essential geometric structure of enhanced LF images much better. The code is publicly available at https://github.com/lyuxianqiang/LFLL-DCU.
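A schematic of the multi-stage unfolding pattern described above, with an illumination-estimation step followed by a learned compensation at every stage. The stage internals, channel counts, and the plain 2D treatment (rather than 4D light fields) are placeholders, not the actual DCUNet modules.

```python
# Schematic deep-unfolding loop: each stage estimates an illumination map from the
# current result, applies a Retinex-style inverse step, and adds a learned compensation.
# Placeholder modules only; not the actual DCUNet architecture.
import torch
import torch.nn as nn

class UnfoldingStage(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.illum = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid())
        self.compensate = nn.Sequential(nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, low, x):
        L = self.illum(x)                        # illumination map from current estimate
        data_term = low / (L + 1e-4)             # Retinex-style inverse step
        return data_term + self.compensate(torch.cat([data_term, x], dim=1))

class UnfoldingNet(nn.Module):
    def __init__(self, n_stages=4):
        super().__init__()
        self.stages = nn.ModuleList(UnfoldingStage() for _ in range(n_stages))

    def forward(self, low):
        x = low
        for stage in self.stages:                # unrolled optimization iterations
            x = stage(low, x)
        return x
```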
{"title":"Enhancing Low-Light Light Field Images With a Deep Compensation Unfolding Network","authors":"Xianqiang Lyu;Junhui Hou","doi":"10.1109/TIP.2024.3420797","DOIUrl":"10.1109/TIP.2024.3420797","url":null,"abstract":"This paper presents a novel and interpretable end-to-end learning framework, called the deep compensation unfolding network (DCUNet), for restoring light field (LF) images captured under low-light conditions. DCUNet is designed with a multi-stage architecture that mimics the optimization process of solving an inverse imaging problem in a data-driven fashion. The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result. Additionally, DCUNet includes a content-associated deep compensation module at each optimization stage to suppress noise and illumination map estimation errors. To properly mine and leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module that comprehensively exploits redundant information in LF images. The experimental results on both simulated and real datasets demonstrate the superiority of our DCUNet over state-of-the-art methods, both qualitatively and quantitatively. Moreover, DCUNet preserves the essential geometric structure of enhanced LF images much better. The code is publicly available at \u0000<uri>https://github.com/lyuxianqiang/LFLL-DCU</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141536193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
One-Shot Any-Scene Crowd Counting With Local-to-Global Guidance
Jiwei Chen, Zengfu Wang

Due to differences in the installation angles, heights, and positions of cameras in real-world scenes, it is difficult for crowd counting models to work in unseen surveillance scenes. In this paper, we are interested in accurate crowd counting based on the data collected by any surveillance camera, that is, counting the crowd in any scene given only one annotated image from that scene. To this end, we first pose crowd counting as a one-shot learning task. Through metric learning, we propose a simple yet effective method that first estimates crowd characteristics and then transfers them to guide the model in counting the crowd. Specifically, to fully capture the crowd characteristics of the target scene, we devise a Multi-Prototype Learner to learn the prototypes of foreground and density from the limited support image using the Expectation-Maximization algorithm. To learn the adaptation capability for any unseen scene, the estimated multiple prototypes are used to guide the crowd counting of query images in a local-to-global way: a CNN is utilized to activate local features, and a transformer is introduced to correlate global features. Extensive experiments on three surveillance datasets suggest that our method outperforms state-of-the-art methods in few-shot crowd counting.
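A minimal EM-style sketch of prototype estimation from support-image features, in the spirit of the Multi-Prototype Learner; the initialization, temperature, and number of prototypes are assumptions rather than the paper's settings.

```python
# EM-style (soft k-means) prototype estimation from support-image pixel features.
# Illustrative only; the actual Multi-Prototype Learner is not specified here.
import torch
import torch.nn.functional as F

def em_prototypes(feats, k=4, n_iters=10, tau=0.05):
    """feats: (N, C) pixel features from the support image -> (k, C) prototypes."""
    protos = feats[torch.randperm(feats.size(0))[:k]]           # random initialization
    for _ in range(n_iters):
        # E-step: soft assignment of every pixel feature to each prototype.
        logits = F.normalize(feats, dim=1) @ F.normalize(protos, dim=1).T / tau
        resp = logits.softmax(dim=1)                            # (N, k) responsibilities
        # M-step: prototypes become responsibility-weighted means of the features.
        protos = (resp.T @ feats) / (resp.sum(dim=0, keepdim=True).T + 1e-6)
    return protos
```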

{"title":"One-Shot Any-Scene Crowd Counting With Local-to-Global Guidance.","authors":"Jiwei Chen, Zengfu Wang","doi":"10.1109/TIP.2024.3420713","DOIUrl":"https://doi.org/10.1109/TIP.2024.3420713","url":null,"abstract":"<p><p>Due to different installation angles, heights, and positions of the camera installation in real-world scenes, it is difficult for crowd counting models to work in unseen surveillance scenes. In this paper, we are interested in accurate crowd counting based on the data collected by any surveillance camera, that is to count the crowd from any scene given only one annotated image from that scene. To this end, we firstly pose crowd counting as a one-shot learning task. Through the metric-learning, we propose a simple yet effective method that firstly estimates crowd characteristics and then transfers them to guide the model to count the crowd. Specifically, to fully capture these crowd characteristics of the target scene, we devise the Multi-Prototype Learner to learn the prototypes of foreground and density from the limited support image using the Expectation-Maximization algorithm. To learn the adaptation capability for any unseen scene, estimated multi prototypes are proposed to guide the crowd counting of query images in a local-to-global way. CNN is utilized to activate the local features. And transformer is introduced to correlate global features. Extensive experiments on three surveillance datasets suggest that our method outperforms the SOTA methods in the few-shot crowd counting.</p>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141536194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spectral Embedding Fusion for Incomplete Multiview Clustering
Jie Chen;Yingke Chen;Zhu Wang;Haixian Zhang;Xi Peng
Incomplete multiview clustering (IMVC) aims to reveal the underlying structure of incomplete multiview data by partitioning data samples into clusters. Several graph-based methods exhibit a strong ability to explore high-order information among multiple views using low-rank tensor learning. However, spectral embedding fusion of multiple views is ignored in low-rank tensor learning. In addition, addressing missing instances or features is still an intractable problem for most existing IMVC methods. In this paper, we present a unified spectral embedding tensor learning (USETL) framework that integrates the spectral embedding fusion of multiple similarity graphs and spectral embedding tensor learning for IMVC. To remove redundant information from the original incomplete multiview data, spectral embedding fusion is performed by introducing spectral rotations at two different data levels, i.e., the spectral embedding feature level and the clustering indicator level. The aim of introducing spectral embedding tensor learning is to capture consistent and complementary information by seeking high-order correlations among multiple views. The strategy of removing missing instances is adopted to construct multiple similarity graphs for incomplete multiple views. Consequently, this strategy provides an intuitive and feasible way to construct multiple similarity graphs. Extensive experimental results on multiview datasets demonstrate the effectiveness of the two spectral embedding fusion methods within the USETL framework.
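For context, the per-view building block that USETL fuses is a standard spectral embedding of a similarity graph, sketched below; the fusion, spectral rotation, and tensor-learning steps of the paper are not reproduced here.

```python
# Standard spectral embedding of one similarity graph via the normalized Laplacian
# (the basic per-view quantity that spectral-embedding fusion operates on).
import numpy as np

def spectral_embedding(W, k):
    """W: (n, n) symmetric nonnegative similarity matrix -> (n, k) row-normalized embedding."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)            # eigenvalues in ascending order
    F = vecs[:, :k]                               # k smallest -> cluster-revealing subspace
    return F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)

# A consensus partition could then be obtained by running k-means on an embedding
# fused from the per-view embeddings.
```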
{"title":"Spectral Embedding Fusion for Incomplete Multiview Clustering","authors":"Jie Chen;Yingke Chen;Zhu Wang;Haixian Zhang;Xi Peng","doi":"10.1109/TIP.2024.3420796","DOIUrl":"10.1109/TIP.2024.3420796","url":null,"abstract":"Incomplete multiview clustering (IMVC) aims to reveal the underlying structure of incomplete multiview data by partitioning data samples into clusters. Several graph-based methods exhibit a strong ability to explore high-order information among multiple views using low-rank tensor learning. However, spectral embedding fusion of multiple views is ignored in low-rank tensor learning. In addition, addressing missing instances or features is still an intractable problem for most existing IMVC methods. In this paper, we present a unified spectral embedding tensor learning (USETL) framework that integrates the spectral embedding fusion of multiple similarity graphs and spectral embedding tensor learning for IMVC. To remove redundant information from the original incomplete multiview data, spectral embedding fusion is performed by introducing spectral rotations at two different data levels, i.e., the spectral embedding feature level and the clustering indicator level. The aim of introducing spectral embedding tensor learning is to capture consistent and complementary information by seeking high-order correlations among multiple views. The strategy of removing missing instances is adopted to construct multiple similarity graphs for incomplete multiple views. Consequently, this strategy provides an intuitive and feasible way to construct multiple similarity graphs. Extensive experimental results on multiview datasets demonstrate the effectiveness of the two spectral embedding fusion methods within the USETL framework.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141536215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0