
Computational Visual Media: Latest Publications

Multi3D: 3D-aware multimodal image synthesis
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-04-03 | DOI: 10.1007/s41095-024-0422-4

Abstract

3D-aware image synthesis has attained high quality and robust 3D consistency. Existing 3D controllable generative models are designed to synthesize 3D-aware images through a single modality, such as 2D segmentation or sketches, but lack the ability to finely control generated content, such as texture and age. To enhance user-guided controllability, we propose Multi3D, a 3D-aware controllable image synthesis model that supports multi-modal input. Our model can govern the geometry of the generated image using a 2D label map, such as a segmentation or sketch map, while concurrently regulating the appearance of the generated image through a textual description. To demonstrate the effectiveness of our method, we conducted experiments on multiple datasets, including CelebAMask-HQ, AFHQ-cat, and shapenet-car. Qualitative and quantitative evaluations show that our method outperforms existing state-of-the-art methods.
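No implementation accompanies this listing; purely as an illustrative sketch of the general idea of driving geometry from a 2D label map while modulating appearance from a text embedding, a minimal FiLM-style conditioning block might look as follows (module names, channel counts, and the text-embedding dimension are hypothetical, not the authors' architecture).

```python
import torch
import torch.nn as nn

class DualConditionBlock(nn.Module):
    """Toy sketch: a 2D label map drives spatial features, a text embedding modulates them (FiLM-style)."""
    def __init__(self, label_channels=19, text_dim=512, feat_dim=64):
        super().__init__()
        self.geometry = nn.Conv2d(label_channels, feat_dim, kernel_size=3, padding=1)
        self.to_scale_shift = nn.Linear(text_dim, 2 * feat_dim)

    def forward(self, label_map, text_emb):
        feat = self.geometry(label_map)                                # geometry from the 2D label map
        scale, shift = self.to_scale_shift(text_emb).chunk(2, dim=-1)  # appearance from the text embedding
        return feat * (1 + scale[..., None, None]) + shift[..., None, None]

block = DualConditionBlock()
out = block(torch.randn(1, 19, 64, 64), torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 64, 64, 64])
```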

Citations: 0
Active self-training for weakly supervised 3D scene semantic segmentation
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-22 | DOI: 10.1007/s41095-022-0311-7
Gengxin Liu, Oliver van Kaick, Hui Huang, Ruizhen Hu

Since the preparation of labeled data for training semantic segmentation networks of point clouds is a time-consuming process, weakly supervised approaches have been introduced to learn from only a small fraction of data. These methods are typically based on learning with contrastive losses while automatically deriving per-point pseudo-labels from a sparse set of user-annotated labels. In this paper, our key observation is that the selection of which samples to annotate is as important as how these samples are used for training. Thus, we introduce a method for weakly supervised segmentation of 3D scenes that combines self-training with active learning. Active learning selects points for annotation that are likely to result in improvements to the trained model, while self-training makes efficient use of the user-provided labels for learning the model. We demonstrate that our approach leads to an effective method that provides improvements in scene segmentation over previous work and baselines, while requiring only a few user annotations.
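The abstract does not state the acquisition criterion; as a hedged illustration of one common active-learning rule, the sketch below selects the points with the highest predictive entropy for annotation, assuming per-point softmax predictions are available (the function name and class/point counts are hypothetical).

```python
import numpy as np

def select_points_by_entropy(probs, budget):
    """probs: (N, C) per-point softmax predictions; return indices of the `budget` most uncertain points."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-budget:]

probs = np.random.dirichlet(np.ones(13), size=10000)  # fake predictions for 10k points, 13 classes
to_annotate = select_points_by_entropy(probs, budget=50)
print(to_annotate.shape)  # (50,)
```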

Citations: 0
Class-conditional domain adaptation for semantic segmentation
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-22 | DOI: 10.1007/s41095-023-0362-4
Yue Wang, Yuke Li, James H. Elder, Runmin Wu, Huchuan Lu

Semantic segmentation is an important sub-task for many applications. However, pixel-level ground-truth labeling is costly, and there is a tendency to overfit to training data, thereby limiting the generalization ability. Unsupervised domain adaptation can potentially address these problems by allowing systems trained on labelled datasets from the source domain (including less expensive synthetic domain) to be adapted to a novel target domain. The conventional approach involves automatic extraction and alignment of the representations of source and target domains globally. One limitation of this approach is that it tends to neglect the differences between classes: representations of certain classes can be more easily extracted and aligned between the source and target domains than others, limiting the adaptation over all classes. Here, we address this problem by introducing a Class-Conditional Domain Adaptation (CCDA) method. This incorporates a class-conditional multi-scale discriminator and class-conditional losses for both segmentation and adaptation. Together, they measure the segmentation, shift the domain in a class-conditional manner, and equalize the loss over classes. Experimental results demonstrate that the performance of our CCDA method matches, and in some cases, surpasses that of state-of-the-art methods.
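As a rough illustration of equalizing the loss over classes (not the paper's exact formulation), an unreduced per-pixel loss can be averaged within each class before averaging across classes, so frequent classes do not dominate.

```python
import torch

def class_balanced_loss(per_pixel_loss, labels, num_classes):
    """Average a per-pixel loss within each class first, then across classes
    (illustrative sketch only, not the paper's class-conditional loss)."""
    losses = []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            losses.append(per_pixel_loss[mask].mean())
    return torch.stack(losses).mean()

per_pixel = torch.rand(2, 128, 128)           # e.g. unreduced cross-entropy per pixel
labels = torch.randint(0, 19, (2, 128, 128))  # semantic labels
print(class_balanced_loss(per_pixel, labels, num_classes=19))
```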

Citations: 0
Geometry-aware 3D pose transfer using transformer autoencoder
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-22 | DOI: 10.1007/s41095-023-0379-8
Shanghuan Liu, Shaoyan Gai, Feipeng Da, Fazal Waris

3D pose transfer over unorganized point clouds is a challenging generation task, which transfers a source’s pose to a target shape and keeps the target’s identity. Recent deep models have learned deformations and used the target’s identity as a style to modulate the combined features of two shapes or the aligned vertices of the source shape. However, all operations in these models are point-wise and independent and ignore the geometric information on the surface and structure of the input shapes. This disadvantage severely limits the generation and generalization capabilities. In this study, we propose a geometry-aware method based on a novel transformer autoencoder to solve this problem. An efficient self-attention mechanism, that is, cross-covariance attention, was utilized across our framework to perceive the correlations between points at different distances. Specifically, the transformer encoder extracts the target shape’s local geometry details for identity attributes and the source shape’s global geometry structure for pose information. Our transformer decoder efficiently learns deformations and recovers identity properties by fusing and decoding the extracted features in a geometry attentional manner, which does not require corresponding information or modulation steps. The experiments demonstrated that the geometry-aware method achieved state-of-the-art performance in a 3D pose transfer task. The implementation code and data are available at https://github.com/SEULSH/Geometry-Aware-3D-Pose-Transfer-Using-Transformer-Autoencoder.
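Cross-covariance attention, introduced in XCiT, forms a d×d channel-to-channel attention map instead of an N×N point-to-point map, so its cost grows linearly with the number of points. The stripped-down sketch below omits the learned query/key/value projections and the temperature parameter, so it is illustrative rather than the paper's module.

```python
import torch
import torch.nn.functional as F

def cross_covariance_attention(x, num_heads=4):
    """Sketch of cross-covariance attention: attention is computed between feature
    channels (d x d) rather than between points (N x N)."""
    B, N, C = x.shape
    h, d = num_heads, C // num_heads
    q = k = v = x.reshape(B, N, h, d).permute(0, 2, 3, 1)  # (B, h, d, N)
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)       # (B, h, d, d): channel-channel attention
    out = attn @ v                                          # (B, h, d, N)
    return out.permute(0, 3, 1, 2).reshape(B, N, C)

points = torch.randn(2, 1024, 64)                # batch of per-point features
print(cross_covariance_attention(points).shape)  # torch.Size([2, 1024, 64])
```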

Citations: 0
Multi-scale hash encoding based neural geometry representation
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-22 | DOI: 10.1007/s41095-023-0340-x

Abstract

Recently, neural implicit function-based representations have attracted increasing attention and have been widely used to represent surfaces with differentiable neural networks. However, surface reconstruction from point clouds or multi-view images using existing neural geometry representations still suffers from slow computation and poor accuracy. To alleviate these issues, we propose a multi-scale hash encoding-based neural geometry representation which effectively and efficiently represents the surface as a signed distance field. Our novel neural network structure carefully combines low-frequency Fourier position encoding with multi-scale hash encoding. The initialization of the geometry network and the geometry features of the rendering module are redesigned accordingly. Our experiments demonstrate that the proposed representation is at least 10 times faster for reconstructing point clouds with millions of points. It also significantly improves the speed and accuracy of multi-view reconstruction. Our code and models are available at https://github.com/Dengzhi-USTC/Neural-Geometry-Reconstruction.
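As a small illustration of the low-frequency Fourier position encoding that the paper combines with a multi-scale hash grid, the sketch below encodes 3D coordinates with sine/cosine bands; the hash-grid component (Instant-NGP style) is omitted, and the band count is arbitrary.

```python
import torch

def fourier_encode(x, num_bands=4):
    """Low-frequency Fourier position encoding: map coordinates to
    [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0..num_bands-1."""
    freqs = 2.0 ** torch.arange(num_bands) * torch.pi          # (num_bands,)
    angles = x[..., None] * freqs                              # (..., 3, num_bands)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)      # (..., 3, 2*num_bands)
    return enc.flatten(start_dim=-2)                           # (..., 6*num_bands)

pts = torch.rand(1024, 3)
print(fourier_encode(pts).shape)  # torch.Size([1024, 24])
```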

Citations: 0
Erratum to: Dynamic ocean inverse modeling based on differentiable rendering
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-03-22 | DOI: 10.1007/s41095-024-0398-z
Xueguang Xie, Yang Gao, Fei Hou, Aimin Hao, Hong Qin

The authors apologize for an error in the article: the images in Figs. 14(a) and 14(d) were mistakenly presented as left–right mirror images. The authors have flipped them so that these panels now correspond correctly with the other subfigures (b, c, e, f). The corrected version of Fig. 14 is provided below.

Citations: 0
Delving into high-quality SVBRDF acquisition: A new setup and method
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-02-09 | DOI: 10.1007/s41095-023-0352-6
Chuhua Xian, Jiaxin Li, Hao Wu, Zisen Lin, Guiqing Li

In this study, we present a new framework for acquiring high-quality SVBRDF maps that addresses the limitations of current methods. The core of our method is a simple hardware setup consisting of a consumer-level camera, LED lights, and a carefully designed network that can accurately obtain the high-quality SVBRDF properties of a nearly planar object. By capturing a flexible number of images of an object, our network uses different subnetworks to train different property maps and employs an appropriate loss function for each of them. To further enhance the quality of the maps, we improve the network structure by adding a novel skip connection that links the encoder and decoder with global features. Through extensive experiments using both synthetic and real-world materials, we demonstrate that our method outperforms previous methods. Furthermore, our proposed setup can also be used to acquire physically based rendering maps of special materials.
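To illustrate the idea of dedicating a subnetwork and a separate loss to each property map, the sketch below uses the common diffuse/normal/roughness/specular split; the property names, channel counts, and head design are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class SVBRDFHeads(nn.Module):
    """Sketch: separate lightweight heads predict each SVBRDF property map from shared
    features, so each map can be supervised with its own loss."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.heads = nn.ModuleDict({
            "diffuse":   nn.Conv2d(feat_dim, 3, 1),
            "normal":    nn.Conv2d(feat_dim, 3, 1),
            "roughness": nn.Conv2d(feat_dim, 1, 1),
            "specular":  nn.Conv2d(feat_dim, 3, 1),
        })

    def forward(self, feat):
        return {name: head(feat) for name, head in self.heads.items()}

heads = SVBRDFHeads()
maps = heads(torch.randn(1, 64, 256, 256))
loss = sum(nn.functional.l1_loss(pred, torch.zeros_like(pred)) for pred in maps.values())
print({k: tuple(v.shape) for k, v in maps.items()}, float(loss))
```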

Citations: 0
CF-DAN: Facial-expression recognition based on cross-fusion dual-attention network
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-02-08 | DOI: 10.1007/s41095-023-0369-x

Abstract

Recently, facial-expression recognition (FER) has primarily focused on images in the wild, which involve factors such as face occlusion and image blurring, rather than laboratory images. These complex field environments introduce new challenges to FER. To address them, this study proposes a cross-fusion dual-attention network. The network comprises three parts: (1) a cross-fusion grouped dual-attention mechanism that refines local features and obtains global information; (2) a C² activation-function construction method, a piecewise cubic polynomial with three degrees of freedom, which requires less computation, improves flexibility and recognition ability, and better addresses slow running speeds and neuron inactivation; and (3) a closed-loop operation between the self-attention distillation process and residual connections that suppresses redundant information and improves the generalization ability of the model. The recognition accuracies on the RAF-DB, FERPlus, and AffectNet datasets were 92.78%, 92.02%, and 63.58%, respectively. Experiments show that this model provides more effective solutions for FER tasks.
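The abstract does not give the polynomial's coefficients; in general, C² continuity of a piecewise cubic activation means that adjacent pieces p_k and p_{k+1} must agree in value, first derivative, and second derivative at every breakpoint x_i, as summarized below.

```latex
% Generic C^2 junction conditions for a piecewise cubic activation
% (the specific polynomial used in the paper is not given in the abstract):
\begin{aligned}
p_k(x_i)    &= p_{k+1}(x_i),\\
p_k'(x_i)   &= p_{k+1}'(x_i),\\
p_k''(x_i)  &= p_{k+1}''(x_i).
\end{aligned}
```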

Citations: 0
Multi-task learning and joint refinement between camera localization and object detection
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-02-08 | DOI: 10.1007/s41095-022-0319-z
Junyi Wang, Yue Qi

Visual localization and object detection both play important roles in various tasks. In many indoor application scenarios where some detected objects have fixed positions, the two techniques work closely together. However, few researchers consider these two tasks simultaneously, because of a lack of datasets and the limited attention paid to such environments. In this paper, we explore multi-task network design and joint refinement of detection and localization. To address the dataset problem, we construct a medium indoor scene of an aviation exhibition hall through a semi-automatic process. The dataset provides localization and detection information, and is publicly available at https://drive.google.com/drive/folders/1U28zkuN4_I0dbzkqyIAKlAl5k9oUK0jI?usp=sharing for benchmarking localization and object detection tasks. Targeting this dataset, we have designed a multi-task network, JLDNet, based on YOLO v3, that outputs a target point cloud and object bounding boxes. For dynamic environments, the detection branch also promotes the perception of dynamics. JLDNet includes image feature learning, point feature learning, feature fusion, detection construction, and point cloud regression. Moreover, object-level bundle adjustment is used to further improve localization and detection accuracy. To test JLDNet and compare it to other methods, we conducted experiments on 7 static scenes, our constructed dataset, and the dynamic TUM RGB-D and Bonn datasets. Our results show state-of-the-art accuracy for both tasks and demonstrate the benefit of addressing them jointly.

Citations: 0
DualSmoke: Sketch-based smoke illustration design with two-stage generative model
IF 6.9 | CAS Tier 3, Computer Science | Q1 Computer Science | Pub Date: 2024-02-08 | DOI: 10.1007/s41095-022-0318-0
Haoran Xie, Keisuke Arihara, Syuhei Sato, Kazunori Miyata

The dynamic effects of smoke are impressive in illustration design, but designing smoke effects without domain knowledge of fluid simulation is a challenging task for non-expert users. In this work, we propose DualSmoke, a two-stage global-to-local generation framework for interactive smoke illustration design. In the global stage, the proposed approach utilizes fluid patterns to generate Lagrangian coherent structures from the user's hand-drawn sketches. In the local stage, detailed flow patterns are obtained from the generated coherent structures. Finally, we apply a guiding force field to a smoke simulator to produce the desired smoke illustration. To construct the training dataset, DualSmoke generates flow patterns using finite-time Lyapunov exponents of the velocity fields, and the synthetic sketch data are generated from the flow patterns by skeleton extraction. Our user study verifies that the proposed design interface provides various smoke illustration designs with good usability. Our code is available at https://github.com/shasph/DualSmoke.
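Finite-time Lyapunov exponents measure how strongly nearby particles separate under the flow map over a time horizon T, and their ridges mark Lagrangian coherent structures. The 2D NumPy sketch below is illustrative only: the grid, the shear-flow example, and the function name are assumptions, not the paper's implementation.

```python
import numpy as np

def ftle_field(flow_map_x, flow_map_y, dx, T):
    """Finite-time Lyapunov exponent from a 2D flow map sampled on a regular grid.
    flow_map_x/y hold the end positions of particles seeded on the grid after advection time T."""
    dxdx, dxdy = np.gradient(flow_map_x, dx)
    dydx, dydy = np.gradient(flow_map_y, dx)
    ftle = np.zeros_like(flow_map_x)
    for i in range(flow_map_x.shape[0]):
        for j in range(flow_map_x.shape[1]):
            J = np.array([[dxdx[i, j], dxdy[i, j]],
                          [dydx[i, j], dydy[i, j]]])   # flow-map Jacobian
            C = J.T @ J                                 # Cauchy-Green deformation tensor
            lam_max = np.linalg.eigvalsh(C)[-1]
            ftle[i, j] = np.log(np.sqrt(max(lam_max, 1e-12))) / abs(T)
    return ftle

# Toy example: a simple shear-flow map on a 64x64 grid
n, dx, T = 64, 0.1, 1.0
xs, ys = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
fx, fy = xs + T * ys, ys            # particles sheared along x
print(ftle_field(fx, fy, dx, T).shape)  # (64, 64)
```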

Citations: 0