
Latest publications in Computer Vision and Image Understanding

Seam estimation based on dense matching for parallax-tolerant image stitching
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-04 | DOI: 10.1016/j.cviu.2024.104219
Zhihao Zhang, Jie He, Mouquan Shen, Xianqiang Yang
Image stitching with large parallax poses a significant challenge in the field of computer vision. Existing seam-based approaches attempt to address parallax artifacts by stitching images along seams. However, issues such as object mismatches, disappearances, and duplications still arise occasionally, primarily due to inaccurate alignment of dense pixels or inappropriate seam estimation methods. In this paper, we propose a robust seam-based parallax-tolerant image stitching method that leverages dense flow estimation from state-of-the-art approaches. First, we develop a seam estimation method that does not require pre-estimation of an image warping model. Instead, it directly estimates the seam by measuring the local smoothness of the optical flow field and incorporating a penalty term for duplications. Subsequently, we design an iterative algorithm that utilizes the location of the estimated seam to solve a spatially smooth warping model and eliminate outlier corresponding pairs. By employing this approach, we effectively address the intertwined challenges of estimating the warping model and the seam. Experiments on real-world images show that our proposed method achieves superior local alignment accuracy near the stitching seam and outperforms other state-of-the-art techniques in visual stitching quality. Code is available at https://github.com/zhihao0512/dense-matching-image-stitching.
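To make the flow-smoothness idea concrete, here is a minimal Python sketch (not the authors' formulation): it turns the local smoothness of a dense flow field into a per-pixel cost map and traces a vertical seam with simple dynamic programming. The duplication penalty and the warping model are omitted, and the random flow stands in for a real dense-matching estimate.

```python
# Minimal sketch (not the paper's exact energy): build a per-pixel seam cost
# from the local smoothness of a dense flow field and trace a vertical seam
# with dynamic programming, as in seam carving.
import numpy as np

def flow_smoothness_cost(flow):
    """flow: (H, W, 2) dense flow from the reference to the target image.
    Returns an (H, W) cost map that is large where the flow changes abruptly."""
    gy, gx = np.gradient(flow[..., 0])
    hy, hx = np.gradient(flow[..., 1])
    return np.sqrt(gx**2 + gy**2 + hx**2 + hy**2)

def vertical_seam(cost):
    """Dynamic-programming seam: one column index per row minimizing total cost."""
    H, W = cost.shape
    acc = cost.copy()
    for i in range(1, H):
        left = np.roll(acc[i - 1], 1);   left[0] = np.inf
        right = np.roll(acc[i - 1], -1); right[-1] = np.inf
        acc[i] += np.minimum(np.minimum(left, acc[i - 1]), right)
    seam = np.zeros(H, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for i in range(H - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, W)
        seam[i] = lo + int(np.argmin(acc[i, lo:hi]))
    return seam

# toy usage with a random flow field standing in for a real estimate
flow = np.random.randn(64, 80, 2).astype(np.float32)
seam = vertical_seam(flow_smoothness_cost(flow))
print(seam[:10])
```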
Citations: 0
Monocular depth estimation with boundary attention mechanism and Shifted Window Adaptive Bins
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-04 | DOI: 10.1016/j.cviu.2024.104220
Hengjia Hu, Mengnan Liang, Congcong Wang, Meng Zhao, Fan Shi, Chao Zhang, Yilin Han
Monocular depth estimation is a classic research topic in computer vision. In recent years, the development of Convolutional Neural Networks (CNNs) has facilitated significant breakthroughs in this field. However, two challenges remain: (1) The network struggles to effectively fuse edge features in the feature fusion stage, which ultimately results in loss of structure or boundary distortion of objects in the scene. (2) Classification-based studies typically depend on Transformers for global modeling, which often introduces substantial computational overhead. In this paper, we propose two modules to address the aforementioned issues. The first module is the Boundary Attention Module (BAM), which leverages the attention mechanism to enhance the ability of the network to perceive object boundaries during the feature fusion stage. In addition, to mitigate the computational overhead of predicting adaptive bins, we propose a Shifted Window Adaptive Bins (SWAB) module to reduce the amount of computation in global modeling. The proposed method is evaluated on three public datasets, NYU Depth V2, KITTI, and SUN RGB-D, and demonstrates state-of-the-art (SOTA) performance.
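For context, the sketch below shows a generic adaptive-bins decoding step in the style of AdaBins: per-image bin widths are turned into bin centers, and per-pixel probabilities over the bins are mixed into a metric depth map. The tensor shapes and depth range are illustrative assumptions, and this is not the paper's BAM or SWAB module.

```python
# Minimal sketch of adaptive-bins depth decoding (AdaBins-style): per-image bin
# widths define bin centers, and per-pixel probabilities over the bins are
# combined into a metric depth map.
import torch

def depth_from_bins(bin_widths, bin_logits, d_min=1e-3, d_max=10.0):
    """bin_widths: (B, N) unnormalized widths; bin_logits: (B, N, H, W)."""
    w = torch.softmax(bin_widths, dim=1) * (d_max - d_min)   # (B, N) widths over the range
    edges = d_min + torch.cumsum(w, dim=1)                   # right edge of each bin
    centers = edges - 0.5 * w                                # (B, N) bin centers
    probs = torch.softmax(bin_logits, dim=1)                 # (B, N, H, W) per-pixel mixture
    depth = (probs * centers[:, :, None, None]).sum(dim=1)   # (B, H, W) expected depth
    return depth

# toy usage
B, N, H, W = 2, 64, 24, 32
depth = depth_from_bins(torch.randn(B, N), torch.randn(B, N, H, W))
print(depth.shape, float(depth.min()), float(depth.max()))
```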
Citations: 0
Multivariate prototype representation for domain-generalized incremental learning
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-30 | DOI: 10.1016/j.cviu.2024.104215
Can Peng, Piotr Koniusz, Kaiyu Guo, Brian C. Lovell, Peyman Moghadam
Deep learning models often suffer from catastrophic forgetting when fine-tuned with samples of new classes. This issue becomes even more challenging when there is a domain shift between training and testing data. In this paper, we address the critical yet less explored Domain-Generalized Class-Incremental Learning (DGCIL) task. We propose a DGCIL approach designed to memorize old classes, adapt to new classes, and reliably classify objects from unseen domains. Specifically, our loss formulation maintains classification boundaries while suppressing domain-specific information for each class. Without storing old exemplars, we employ knowledge distillation and estimate the drift of old class prototypes as incremental training progresses. Our prototype representations are based on multivariate Normal distributions, with means and covariances continually adapted to reflect evolving model features, providing effective representations for old classes. We then sample pseudo-features for these old classes from the adapted Normal distributions using Cholesky decomposition. Unlike previous pseudo-feature sampling strategies that rely solely on average mean prototypes, our method captures richer semantic variations. Experiments on several benchmarks demonstrate the superior performance of our method compared to the state of the art.
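The prototype-replay idea can be illustrated with a short sketch: each old class keeps a mean and covariance, and pseudo-features are drawn from the (drift-shifted) Normal distribution via its Cholesky factor. The constant drift vector here is a hypothetical placeholder for the paper's estimated prototype drift, and the distillation losses are not shown.

```python
# Minimal sketch of multivariate-Normal prototype replay via Cholesky sampling.
# Only the pseudo-feature generation step is illustrated.
import numpy as np

def fit_prototype(features, eps=1e-4):
    """features: (n, d) old-class features. Returns mean and regularized covariance."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])
    return mu, cov

def sample_pseudo_features(mu, cov, n, drift=None, rng=None):
    """Draw n pseudo-features from N(mu + drift, cov) using the Cholesky factor."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(cov)                     # cov = L @ L.T
    z = rng.standard_normal((n, mu.shape[0]))
    shift = 0.0 if drift is None else drift         # stand-in for the estimated drift
    return (mu + shift) + z @ L.T

# toy usage: 200 old-class features of dimension 16
feats = np.random.randn(200, 16) * 0.5 + 1.0
mu, cov = fit_prototype(feats)
pseudo = sample_pseudo_features(mu, cov, n=32, drift=0.1 * np.ones(16))
print(pseudo.shape)
```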
Citations: 0
Diffusion Models for Counterfactual Explanations
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-29 | DOI: 10.1016/j.cviu.2024.104207
Guillaume Jeanneret, Loïc Simon, Frédéric Jurie
Counterfactual explanations have demonstrated promising results as a post-hoc framework to improve the explanatory power of image classifiers. This paper proposes DiME, a method that allows the generation of counterfactual images using the latest diffusion models. The proposed method uses a guided generative diffusion process to exploit the gradients of the target classifier to generate counterfactual explanations of the input instances. Furthermore, we examine present strategies for assessing spurious correlations and expand the assessment methods by presenting a novel measure, Correlation Difference, which is more efficient at detecting such correlations. The work includes a comprehensive ablation study and a thorough experimental validation demonstrating that the proposed algorithm outperforms previous state-of-the-art results on the CelebA, CelebAHQ and BDD100k datasets.
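The classifier-guidance mechanism that such counterfactual methods build on can be sketched as follows, with toy stand-ins for the diffusion model's predicted mean and the target classifier; DiME's specific losses, schedule, and perceptual terms are not reproduced here.

```python
# Minimal sketch of a classifier-guided reverse diffusion step: nudge the
# denoiser's predicted mean using the gradient of log p(target | x_t).
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    def __init__(self, in_dim=3 * 16 * 16, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(in_dim, n_classes)
    def forward(self, x):
        return self.fc(x.flatten(1))

def guided_step(x_t, denoise_mean, classifier, target, scale=5.0):
    """One guided reverse step towards images the classifier assigns to `target`."""
    x_t = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_t), dim=1)
    selected = log_probs[torch.arange(x_t.shape[0]), target].sum()
    grad = torch.autograd.grad(selected, x_t)[0]
    return denoise_mean + scale * grad          # counterfactual-oriented nudge

# toy usage
x_t = torch.randn(4, 3, 16, 16)
denoise_mean = torch.randn_like(x_t)            # would come from the diffusion model
clf = ToyClassifier()
x_prev = guided_step(x_t, denoise_mean, clf, target=torch.tensor([1, 2, 3, 4]))
print(x_prev.shape)
```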
Citations: 0
3D scene generation for zero-shot learning using ChatGPT guided language prompts
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-29 | DOI: 10.1016/j.cviu.2024.104211
Sahar Ahmadi, Ali Cheraghian, Townim Faisal Chowdhury, Morteza Saberi, Shafin Rahman
Zero-shot learning in the realm of 3D point cloud data remains relatively unexplored compared to its 2D image counterpart. This domain introduces fresh challenges due to the absence of robust pre-trained feature extraction models. To tackle this, we introduce a prompt-guided method for 3D scene generation and supervision, enhancing the network's ability to comprehend the intricate relationships between seen and unseen objects. Initially, we utilize basic prompts resembling scene annotations generated from one or two point cloud objects. Recognizing the limited diversity of basic prompts, we employ ChatGPT to expand them, enriching the contextual information within the descriptions. Subsequently, leveraging these descriptions, we arrange point cloud objects' coordinates to fabricate augmented 3D scenes. Lastly, employing contrastive learning, we train our proposed architecture end-to-end, utilizing pairs of 3D scenes and prompt-based captions. We posit that 3D scenes capture object relationships more effectively than individual objects, as demonstrated by the effectiveness of language models like BERT in contextual understanding. Our prompt-guided scene generation method amalgamates data augmentation and prompt-based annotation, thereby enhancing 3D ZSL performance. We present ZSL and generalized ZSL results on both synthetic (ModelNet40, ModelNet10, and ShapeNet) and real-scanned (ScanObjectNN) 3D object datasets. Furthermore, we challenge the model by training with synthetic data and testing with real-scanned data, achieving state-of-the-art performance compared to existing 2D and 3D ZSL methods in the literature. Codes and models are available at: https://github.com/saharahmadisohraviyeh/ChatGPT_ZSL_3D.
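The contrastive scene-caption alignment can be illustrated with a standard symmetric InfoNCE objective, sketched below with random embeddings standing in for the point-cloud and text encoders; the temperature and the encoders themselves are assumptions, not the paper's exact setup.

```python
# Minimal sketch of a CLIP-style contrastive loss between 3D-scene embeddings
# and prompt-based caption embeddings for matched pairs.
import torch
import torch.nn.functional as F

def contrastive_loss(scene_emb, text_emb, temperature=0.07):
    """scene_emb, text_emb: (B, D) embeddings of matched scene/caption pairs."""
    scene_emb = F.normalize(scene_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = scene_emb @ text_emb.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(scene_emb.shape[0])
    # symmetric loss: match scenes to captions and captions to scenes
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# toy usage with random embeddings standing in for encoder outputs
loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
print(float(loss))
```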
Citations: 0
A large corpus for the recognition of Greek Sign Language gestures
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-29 | DOI: 10.1016/j.cviu.2024.104212
Katerina Papadimitriou, Galini Sapountzaki, Kyriaki Vasilaki, Eleni Efthimiou, Stavroula-Evita Fotinea, Gerasimos Potamianos
Sign language recognition (SLR) from videos constitutes a captivating problem in gesture recognition, requiring the interpretation of hand movements, facial expressions, and body postures. The complexity of sign formation, signing variability among signers, and the technical hurdles of visual detection and tracking render SLR a challenging task. At the same time, the scarcity of large-scale SLR datasets, which are critical for developing robust data-intensive deep-learning SLR models, exacerbates these issues. In this article, we introduce a multi-signer video corpus of Greek Sign Language (GSL), which is the largest GSL database to date, serving as a valuable resource for SLR research. This corpus comprises an extensive RGB+D video collection that conveys rich lexical content in a multi-modal fashion, encompassing three subsets: (i) isolated signs; (ii) continuous signing; and (iii) continuous alphabet fingerspelling of words. Moreover, we introduce a comprehensive experimental setup that paves the way for more accurate and robust SLR solutions. In particular, in addition to the multi-signer (MS) and signer-independent (SI) settings, we employ a signer-adapted (SA) experimental paradigm, facilitating a comprehensive evaluation of system performance across various scenarios. Further, we provide three baseline SLR systems for isolated signs, continuous signing, and continuous fingerspelling. These systems leverage cutting-edge methods in deep learning and sequence modeling to capture the intricate temporal dynamics inherent in sign gestures. The models are evaluated on the three corpus subsets, setting their state-of-the-art recognition benchmark. The SL-ReDu GSL corpus, including its recommended experimental frameworks, is publicly available at https://sl-redu.e-ce.uth.gr/corpus.
Citations: 0
Image compressive sensing reconstruction via nonlocal low-rank residual-based ADMM framework
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-28 | DOI: 10.1016/j.cviu.2024.104204
Junhao Zhang, Kim-Hui Yap, Lap-Pui Chau, Ce Zhu
The nonlocal low-rank (LR) modeling has proven to be an effective approach in image compressive sensing (CS) reconstruction, which starts by clustering similar patches into nonlocal image groups using the nonlocal self-similarity (NSS) prior and then imposes an LR penalty on each nonlocal image group. However, most existing methods only approximate the LR matrix directly from the degraded nonlocal image group, which may lead to suboptimal LR matrix approximation and thus unsatisfactory reconstruction results. In this paper, we propose a novel nonlocal low-rank residual (NLRR) approach for image CS reconstruction, which progressively approximates the underlying LR matrix by minimizing the LR residual. To do this, we first use the NSS prior to obtain a good estimate of the original nonlocal image group, and then minimize the LR residual between the degraded nonlocal image group and the estimated nonlocal image group to derive a more accurate LR matrix. To ensure the optimization is both feasible and reliable, we employ an alternating direction method of multipliers (ADMM) framework to solve the NLRR-based image CS reconstruction problem. Our experimental results show that the proposed NLRR algorithm achieves superior performance against many popular or state-of-the-art image CS reconstruction methods, both in objective metrics and subjective perceptual quality.
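A rough sketch of the low-rank residual idea, under simplifying assumptions: given a degraded nonlocal group and an NSS-based reference estimate, the residual between them is cleaned by singular value thresholding and added back. The CS measurement operator, the group construction, and the full ADMM loop are omitted, and the reference estimate below is a crude stand-in.

```python
# Minimal sketch of one low-rank residual update via singular value
# thresholding (SVT); not the paper's full NLRR/ADMM algorithm.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def low_rank_residual_update(Y, X_ref, tau=0.5):
    """Approximate the underlying low-rank group as X_ref + SVT(Y - X_ref)."""
    return X_ref + svt(Y - X_ref, tau)

# toy usage: a rank-2 group corrupted by noise, with a crude reference estimate
rng = np.random.default_rng(0)
clean = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 32))
Y = clean + 0.3 * rng.standard_normal(clean.shape)
X_ref = Y.mean(axis=1, keepdims=True) @ np.ones((1, Y.shape[1]))  # stand-in reference
X_hat = low_rank_residual_update(Y, X_ref, tau=1.0)
print(np.linalg.norm(X_hat - clean), np.linalg.norm(Y - clean))
```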
Citations: 0
A MLP architecture fusing RGB and CASSI for computational spectral imaging
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-25 | DOI: 10.1016/j.cviu.2024.104214
Zeyu Cai, Ru Hong, Xun Lin, Jiming Yang, YouLiang Ni, Zhen Liu, Chengqian Jin, Feipeng Da
The Coded Aperture Snapshot Spectral Imaging (CASSI) system offers significant advantages in dynamically acquiring hyperspectral images compared to traditional measurement methods. However, it faces the following challenges: (1) Traditional masks rely on random patterns or analytical design, limiting CASSI's performance improvement. (2) Existing CASSI reconstruction algorithms do not fully utilize RGB information. (3) High-quality reconstruction algorithms are often slow and limited to offline scene reconstruction. To address these issues, this paper proposes a new MLP architecture, Spectral-Spatial MLP (SSMLP), which replaces the transformer structure with a network using CASSI measurements and RGB as multimodal inputs. This maintains reconstruction quality while significantly improving reconstruction speed. Additionally, we construct a teacher-student network (SSMLP with a teacher, SSMLP-WT) to transfer the knowledge learned from a large model to a smaller network, further enhancing the smaller network's accuracy. Extensive experiments show that SSMLP matches the performance of transformer-based structures in spectral image reconstruction while improving inference speed by at least 50%. The reconstruction quality of SSMLP-WT is further improved by knowledge transfer without changing the network, with the teacher boosting performance by 0.92 dB (44.73 dB vs. 43.81 dB).
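The teacher-student transfer can be sketched as a simple two-term loss: the student is fit to the ground-truth spectral cube while also being pulled towards the frozen teacher's reconstruction. The loss weighting, the L1 choice, and the tensor shapes below are illustrative assumptions rather than the paper's training recipe.

```python
# Minimal sketch of a reconstruction + distillation training signal for a
# student spectral-imaging network supervised by a frozen teacher.
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, target, alpha=0.5):
    """student_out/teacher_out/target: (B, C, H, W) reconstructed spectral cubes."""
    fidelity = F.l1_loss(student_out, target)                  # match ground truth
    distill = F.l1_loss(student_out, teacher_out.detach())     # match frozen teacher
    return fidelity + alpha * distill

# toy usage with random tensors standing in for network outputs
B, C, H, W = 2, 28, 32, 32
loss = distillation_loss(torch.randn(B, C, H, W), torch.randn(B, C, H, W),
                         torch.randn(B, C, H, W))
print(float(loss))
```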
Citations: 0
A GCN and Transformer complementary network for skeleton-based action recognition
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-22 | DOI: 10.1016/j.cviu.2024.104213
Xuezhi Xiang, Xiaoheng Li, Xuzhao Liu, Yulong Qiao, Abdulmotaleb El Saddik
Graph Convolution Networks (GCNs) have been widely used in skeleton-based action recognition. Although there has been significant progress, an inherent limitation remains in the restricted receptive field of GCNs, hindering their ability to extract global dependencies effectively. Moreover, joints that are structurally separated can still be strongly correlated. Previous works rarely explore local and global correlations of joints together, leading to insufficient modeling of the complex dynamics of skeleton sequences. To address this issue, we propose a GCN and Transformer complementary network (GTC-Net) that allows parallel communication between the GCN and Transformer domains. Specifically, we introduce a graph convolution and self-attention combined module (GAM), which can effectively leverage the complementarity of GCN and self-attention to perceive local and global dependencies of joints of the human body. Furthermore, in order to address the problems of long-term sequence ordering and position detection, we design a position-aware module (PAM), which can explicitly capture the ordering information and unique identity information for body joints of a skeleton sequence. Extensive experiments on the NTU RGB+D 60 and NTU RGB+D 120 datasets are conducted to evaluate our proposed method. The results demonstrate that our method achieves competitive results on both datasets.
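The parallel local/global design can be sketched as a block that applies a row-normalized graph convolution and multi-head self-attention over the joints of one frame and fuses them additively. The adjacency matrix, dimensions, and fusion rule below are placeholders, not the paper's GAM or PAM.

```python
# Minimal sketch of a graph-convolution + self-attention combined block over
# skeleton joints, illustrating the parallel local/global pathway.
import torch
import torch.nn as nn

class GraphAttnBlock(nn.Module):
    def __init__(self, dim, num_joints, heads=4):
        super().__init__()
        A = torch.eye(num_joints) + torch.rand(num_joints, num_joints)  # stand-in skeleton graph
        self.register_buffer("A_hat", A / A.sum(dim=1, keepdim=True))   # row-normalized adjacency
        self.gcn_proj = nn.Linear(dim, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):
        """x: (B, J, D) per-joint features for one frame."""
        local_ = self.gcn_proj(self.A_hat @ x)           # graph convolution: local structure
        global_, _ = self.attn(x, x, x)                  # self-attention: global joint dependencies
        return self.out(torch.relu(local_ + global_))    # simple additive fusion

# toy usage: batch of 2 skeletons with 25 joints and 64-d features
block = GraphAttnBlock(dim=64, num_joints=25)
y = block(torch.randn(2, 25, 64))
print(y.shape)
```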
Citations: 0
Reverse Stable Diffusion: What prompt was used to generate this image?
IF 4.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-19 | DOI: 10.1016/j.cviu.2024.104210
Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, Mubarak Shah
Text-to-image diffusion models have recently attracted the interest of many researchers, and inverting the diffusion process can play an important role in better understanding the generative process and how to engineer prompts in order to obtain the desired images. To this end, we study the task of predicting the prompt embedding given an image generated by a generative diffusion model. We consider a series of white-box and black-box models (with and without access to the weights of the diffusion network) to deal with the proposed task. We propose a novel learning framework comprising a joint prompt regression and multi-label vocabulary classification objective that generates improved prompts. To further improve our method, we employ a curriculum learning procedure that promotes the learning of image-prompt pairs with lower labeling noise (i.e. that are better aligned). We conduct experiments on the DiffusionDB data set, predicting text prompts from images generated by Stable Diffusion. In addition, we make an interesting discovery: training a diffusion model on the prompt generation task can make the model generate images that are much better aligned with the input prompts, when the model is directly reused for text-to-image generation. Our code is publicly available for download at https://github.com/CroitoruAlin/Reverse-Stable-Diffusion.
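The joint objective can be sketched as an embedding-regression head plus a multi-label vocabulary head on top of an image feature, as below. The feature dimensions, the cosine/BCE combination, and the loss weighting are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of joint prompt-embedding regression and multi-label
# vocabulary classification from a generated image's feature vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPredictor(nn.Module):
    def __init__(self, img_dim=512, emb_dim=768, vocab_size=1000):
        super().__init__()
        self.regressor = nn.Linear(img_dim, emb_dim)       # prompt-embedding head
        self.classifier = nn.Linear(img_dim, vocab_size)   # multi-label vocabulary head

    def forward(self, img_feat):
        return self.regressor(img_feat), self.classifier(img_feat)

def joint_loss(pred_emb, pred_logits, true_emb, word_labels, beta=0.1):
    reg = 1.0 - F.cosine_similarity(pred_emb, true_emb, dim=1).mean()   # embedding regression
    cls = F.binary_cross_entropy_with_logits(pred_logits, word_labels)  # vocabulary presence
    return reg + beta * cls

# toy usage with random features standing in for image embeddings
model = PromptPredictor()
img_feat = torch.randn(4, 512)
pred_emb, pred_logits = model(img_feat)
loss = joint_loss(pred_emb, pred_logits,
                  torch.randn(4, 768), torch.randint(0, 2, (4, 1000)).float())
print(float(loss))
```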
Citations: 0