
Latest Publications in Graphical Models

Learning a shared deformation space for efficient design-preserving garment transfer
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-05-01 | DOI: 10.1016/j.gmod.2021.101106
Min Shi , Yukun Wei , Lan Chen , Dengming Zhu , Tianlu Mao , Zhaoqi Wang

Garment transfer from a source mannequin to a shape-varying individual is a vital technique in computer graphics. Existing garment transfer methods are either time consuming or lack designed details, especially for clothing with complex styles. In this paper, we propose a data-driven approach to efficiently transfer garments between two distinctive bodies while preserving the source design. Given two sets of simulated garments on a source body and a target body, we use deformation gradients as the representation. Since the garments in our dataset have various topologies, we embed the cloth deformation into the body. For garment transfer, the deformation is decomposed into two aspects, namely style and shape. An encoder-decoder network is proposed to learn a shared space that is invariant to garment style but related to the deformation of human bodies. For a new garment in a different style worn by the source human, our method can efficiently transfer it to the target body with the shared shape deformation while preserving the designed details. We qualitatively and quantitatively evaluate our method on a diverse set of 3D garments that showcase rich wrinkling patterns. Experiments show that the transferred garments preserve the source design even if the target body is quite different from the source one.
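The abstract names deformation gradients as the garment representation but gives no formula. The sketch below shows one standard way to compute per-triangle deformation gradients between a rest mesh and a simulated mesh (the fourth-vertex construction familiar from deformation transfer); the function name and array layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def deformation_gradients(rest_verts, deformed_verts, faces):
    """Per-triangle deformation gradients F with frame_deformed = F @ frame_rest.

    rest_verts, deformed_verts: (V, 3) arrays of vertex positions.
    faces: (F, 3) integer array of triangle indices.
    Returns an (F, 3, 3) array of deformation gradients.
    """
    grads = np.empty((len(faces), 3, 3))
    for k, (i, j, l) in enumerate(faces):
        def frame(verts):
            # Two edge vectors plus a scaled normal give a full 3x3 frame per triangle.
            e1, e2 = verts[j] - verts[i], verts[l] - verts[i]
            n = np.cross(e1, e2)
            n /= np.sqrt(np.linalg.norm(n))  # fourth-vertex scaling, as in deformation transfer
            return np.column_stack([e1, e2, n])
        grads[k] = frame(deformed_verts) @ np.linalg.inv(frame(rest_verts))
    return grads
```

Each 3x3 matrix maps the rest-pose triangle frame to its deformed counterpart; a style/shape decomposition and the encoder-decoder network would then operate on these matrices rather than on vertex positions.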

{"title":"Learning a shared deformation space for efficient design-preserving garment transfer","authors":"Min Shi ,&nbsp;Yukun Wei ,&nbsp;Lan Chen ,&nbsp;Dengming Zhu ,&nbsp;Tianlu Mao ,&nbsp;Zhaoqi Wang","doi":"10.1016/j.gmod.2021.101106","DOIUrl":"10.1016/j.gmod.2021.101106","url":null,"abstract":"<div><p>Garment transfer from a source mannequin to a shape-varying individual is a vital technique in computer graphics. Existing garment transfer methods are either time consuming or lack designed details especially for clothing with complex styles. In this paper, we propose a data-driven approach to efficiently transfer garments between two distinctive bodies while preserving the source design. Given two sets of simulated garments on a source body and a target body, we utilize the deformation gradients as the representation. Since garments in our dataset are with various topologies, we embed cloth deformation to the body. For garment transfer, the deformation is decomposed into two aspects, typically style and shape. An encoder-decoder network is proposed to learn a shared space which is invariant to garment style but related to the deformation of human bodies. For a new garment in a different style worn by the source human, our method can efficiently transfer it to the target body with the shared shape deformation, meanwhile preserving the designed details. We qualitatively and quantitatively evaluate our method on a diverse set of 3D garments that showcase rich wrinkling patterns. Experiments show that the transferred garments can preserve the source design even if the target body is quite different from the source one.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"115 ","pages":"Article 101106"},"PeriodicalIF":1.7,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101106","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84648325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Landmark Detection and 3D Face Reconstruction for Caricature using a Nonlinear Parametric Model
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-05-01 | DOI: 10.1016/j.gmod.2021.101103
Hongrui Cai, Yudong Guo, Zhuang Peng, Juyong Zhang

A caricature is an artistic abstraction of the human face that distorts or exaggerates certain facial features while still retaining a likeness to the given face. Due to the large diversity of geometric and texture variations, automatic landmark detection and 3D face reconstruction for caricatures is a challenging problem that has rarely been studied before. In this paper, we propose the first automatic method for this task via a novel 3D approach. To this end, we first build a dataset with various styles of 2D caricatures and their corresponding 3D shapes, and then build a parametric model on a vertex-based deformation space for 3D caricature faces. Based on the constructed dataset and the nonlinear parametric model, we propose a neural-network-based method to regress the 3D face shape and orientation from an input 2D caricature image. Ablation studies and comparisons with state-of-the-art methods demonstrate the effectiveness of our algorithm design. Extensive experimental results demonstrate that our method works well for various caricatures. Our constructed dataset, source code and trained model are available at https://github.com/Juyong/CaricatureFace.
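As a rough illustration of the regression step described above, here is a minimal PyTorch sketch of a network that maps a caricature image to parametric-model coefficients and a head pose. The backbone, layer sizes, and output dimensions are placeholders and do not reflect the architecture in the paper.

```python
import torch
import torch.nn as nn

class CaricatureRegressor(nn.Module):
    """Toy regressor: image -> (deformation coefficients, rotation + translation)."""
    def __init__(self, n_coeffs=128):  # coefficient count is an arbitrary placeholder
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.coeff_head = nn.Linear(128, n_coeffs)  # nonlinear shape-space coefficients
        self.pose_head = nn.Linear(128, 6)          # 3 for axis-angle rotation, 3 for translation

    def forward(self, img):
        feat = self.backbone(img)
        return self.coeff_head(feat), self.pose_head(feat)

# Forward pass on a dummy batch of 224x224 images.
coeffs, pose = CaricatureRegressor()(torch.randn(2, 3, 224, 224))
```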

{"title":"Landmark Detection and 3D Face Reconstruction for Caricature using a Nonlinear Parametric Model","authors":"Hongrui Cai,&nbsp;Yudong Guo,&nbsp;Zhuang Peng,&nbsp;Juyong Zhang","doi":"10.1016/j.gmod.2021.101103","DOIUrl":"10.1016/j.gmod.2021.101103","url":null,"abstract":"<div><p><span><span>Caricature is an artistic abstraction of the human face by distorting or exaggerating certain facial features, while still retains a likeness with the given face. Due to the large diversity of geometric and texture variations, automatic landmark detection and 3D face reconstruction for caricature is a challenging problem and has rarely been studied before. In this paper, we propose the first automatic method for this task by a novel 3D approach. To this end, we first build a dataset with various styles of 2D caricatures and their corresponding </span>3D shapes<span><span>, and then build a parametric model on vertex based deformation space for 3D caricature face. Based on the constructed dataset and the nonlinear parametric model, we propose a </span>neural network<span> based method to regress the 3D face shape and orientation from the input 2D caricature image. Ablation studies and comparison with state-of-the-art methods demonstrate the effectiveness of our algorithm design. Extensive experimental results demonstrate that our method works well for various caricatures. Our constructed dataset, source code and trained model are available at </span></span></span><span>https://github.com/Juyong/CaricatureFace</span><svg><path></path></svg>.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"115 ","pages":"Article 101103"},"PeriodicalIF":1.7,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74961659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
BPA-GAN: Human motion transfer using body-part-aware generative adversarial networks
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-05-01 | DOI: 10.1016/j.gmod.2021.101107
Jinfeng Jiang , Guiqing Li , Shihao Wu , Huiqian Zhang , Yongwei Nie

Human motion transfer has many applications in human behavior analysis, training data augmentation, and personalization in mixed reality. We propose a Body-Parts-Aware Generative Adversarial Network (BPA-GAN) for image-based human motion transfer. Our key idea is to exploit the human body with segmented parts, instead of the human skeleton used by most existing methods, to encode the human motion information. As a result, we improve the reconstruction quality, the training efficiency, and the temporal consistency by training multiple GANs in a local-to-global manner and adding regularization on the source motion. Extensive experiments show that our method outperforms the baseline and state-of-the-art techniques in preserving the details of body parts.
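The following is a minimal PyTorch sketch of the body-part-aware idea: one small generator per body part, with the outputs composited by part masks. It only illustrates the masked composition; the actual BPA-GAN trains full generator and discriminator pairs in a local-to-global manner with motion regularization.

```python
import torch
import torch.nn as nn

class BodyPartAwareGenerator(nn.Module):
    """Toy generator with one small conv branch per body part, composited by soft part masks."""
    def __init__(self, n_parts=6):
        super().__init__()
        self.part_gens = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
            for _ in range(n_parts)
        )

    def forward(self, pose_map, part_masks):
        # pose_map: (B, 3, H, W) target-pose conditioning; part_masks: (B, P, H, W) soft masks.
        outputs = [gen(pose_map) for gen in self.part_gens]
        stacked = torch.stack(outputs, dim=1)                   # (B, P, 3, H, W)
        return (stacked * part_masks.unsqueeze(2)).sum(dim=1)   # mask-weighted composition
```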

{"title":"BPA-GAN: Human motion transfer using body-part-aware generative adversarial networks","authors":"Jinfeng Jiang ,&nbsp;Guiqing Li ,&nbsp;Shihao Wu ,&nbsp;Huiqian Zhang ,&nbsp;Yongwei Nie","doi":"10.1016/j.gmod.2021.101107","DOIUrl":"10.1016/j.gmod.2021.101107","url":null,"abstract":"<div><p>Human motion<span><span><span> transfer has many applications in human behavior analysis, training data augmentation, and personalization in mixed reality. We propose a Body-Parts-Aware </span>Generative Adversarial Network (BPA-GAN) for image-based human motion transfer. Our key idea is to take advantage of the human body with segmented parts instead of using the human skeleton like most of existing methods to encode the human motion information. As a result, we improve the reconstruction quality, the training efficiency, and the temporal consistency via training multiple GANs in a local-to-global manner and adding </span>regularization on the source motion. Extensive experiments show that our method outperforms the baseline and the state-of-the-art techniques in preserving the details of body parts.</span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"115 ","pages":"Article 101107"},"PeriodicalIF":1.7,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101107","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74511771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Heterogeneous porous scaffold generation using trivariate B-spline solids and triply periodic minimal surfaces
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-05-01 | DOI: 10.1016/j.gmod.2021.101105
Chuanfeng Hu , Hongwei Lin

A porous scaffold is a three-dimensional network structure composed of a large number of pores, and triply periodic minimal surfaces (TPMSs) are one of the conventional tools for designing porous scaffolds. However, discontinuity, incompleteness, and high storage space requirements are the three main shortcomings of porous scaffold design using TPMSs. In this study, we develop an effective method for heterogeneous porous scaffold generation that overcomes these shortcomings. The input of the proposed method is a trivariate B-spline solid with a cubic parametric domain. The method first constructs a threshold distribution field (TDF) in the cubic parametric domain, and then produces a continuous and complete TPMS within it. Finally, by mapping the TPMS from the parametric domain to the trivariate B-spline solid, a continuous and complete porous scaffold is generated. Moreover, we define a new space-saving file format based on the TDF to store porous scaffolds. The experimental results presented in this paper demonstrate the effectiveness and efficiency of the method using a trivariate B-spline solid, as well as the superior space savings of the proposed storage format.
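As a concrete illustration of TPMS-based scaffold generation, the sketch below thresholds a gyroid implicit function against a spatially varying threshold distribution field on a plain cube. The mapping through a trivariate B-spline solid and the TDF-based file format from the paper are not reproduced; the gyroid formula is standard and the linear-ramp TDF is purely illustrative.

```python
import numpy as np

def gyroid_scaffold(resolution=64, periods=3, t_min=0.2, t_max=0.8):
    """Sample a heterogeneous gyroid-style porous volume on a unit cube.

    The gyroid TPMS g = sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) is thresholded
    against a threshold distribution field (here a linear ramp along z), so the
    solid sheet thickens gradually from one side of the cube to the other.
    Returns a boolean occupancy grid (True = solid material).
    """
    u = np.linspace(0.0, 2.0 * np.pi * periods, resolution)
    x, y, z = np.meshgrid(u, u, u, indexing="ij")
    g = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)
    tdf = t_min + (t_max - t_min) * z / z.max()   # threshold distribution field
    return np.abs(g) < tdf                        # sheet-type gyroid of varying thickness

solid = gyroid_scaffold()
print(f"solid fraction: {solid.mean():.2f}")
```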

{"title":"Heterogeneous porous scaffold generation using trivariate B-spline solids and triply periodic minimal surfaces","authors":"Chuanfeng Hu ,&nbsp;Hongwei Lin","doi":"10.1016/j.gmod.2021.101105","DOIUrl":"10.1016/j.gmod.2021.101105","url":null,"abstract":"<div><p>A porous scaffold is a three-dimensional network structure composed of a large number of pores, and triply periodic minimal surfaces<span> (TPMSs) are one of the conventional tools for designing porous scaffolds. However, discontinuity, incompleteness, and high storage space requirements are the three main shortcomings of porous scaffold design using TPMSs. In this study, we developed an effective method for heterogeneous porous scaffold generation to overcome the abovementioned shortcomings of porous scaffold design. The input of the proposed method is a trivariate B-spline solid with a cubic parametric<span> domain. The proposed method first constructs a threshold distribution field (TDF) in the cubic parametric domain, and then produces a continuous and complete TPMS within it. Finally, by mapping the TPMS in the parametric domain to the trivariate B-spline solid, a continuous and complete porous scaffold is generated. Moreover, we defined a new storage space-saving file format based on the TDF to store porous scaffolds. The experimental results presented in this paper demonstrate the effectiveness and efficiency of the method using a trivariate B-spline solid, as well as the superior space-saving of the proposed storage format.</span></span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"115 ","pages":"Article 101105"},"PeriodicalIF":1.7,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101105","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87003425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Learning 3D face reconstruction from a single sketch
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-05-01 | DOI: 10.1016/j.gmod.2021.101102
Li Yang , Jing Wu , Jing Huo , Yu-Kun Lai , Yang Gao

3D face reconstruction from a single image is a classic computer vision problem with many applications. However, most works achieve reconstruction from face photos, and little attention has been paid to reconstruction from other portrait forms. In this paper, we propose a learning-based approach to reconstruct a 3D face from a single face sketch. To overcome the lack of paired sketch-3D data for supervised learning, we introduce a photo-to-sketch synthesis technique to obtain paired training data, and propose a dual-path architecture to achieve synergistic 3D reconstruction from both sketches and photos. We further propose a novel line loss function that refines the reconstruction so that the characteristic details depicted by sketch lines are well preserved. Our method outperforms the state-of-the-art 3D face reconstruction approaches in terms of reconstruction from face sketches. We also demonstrate the use of our method for easy editing of details on 3D face models.

{"title":"Learning 3D face reconstruction from a single sketch","authors":"Li Yang ,&nbsp;Jing Wu ,&nbsp;Jing Huo ,&nbsp;Yu-Kun Lai ,&nbsp;Yang Gao","doi":"10.1016/j.gmod.2021.101102","DOIUrl":"10.1016/j.gmod.2021.101102","url":null,"abstract":"<div><p><span>3D face reconstruction from a single image is a classic computer vision problem with many applications. However, most works achieve reconstruction from face photos, and little attention has been paid to reconstruction from other portrait forms. In this paper, we propose a learning-based approach to reconstruct a 3D face from a single face sketch. To overcome the problem of no paired sketch-3D data for supervised learning, we introduce a photo-to-sketch synthesis technique to obtain paired training data, and propose a dual-path architecture to achieve synergistic </span>3D reconstruction from both sketches and photos. We further propose a novel line loss function to refine the reconstruction with characteristic details depicted by lines in sketches well preserved. Our method outperforms the state-of-the-art 3D face reconstruction approaches in terms of reconstruction from face sketches. We also demonstrate the use of our method for easy editing of details on 3D face models.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"115 ","pages":"Article 101102"},"PeriodicalIF":1.7,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101102","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72440248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Orthogonality of isometries in the conformal model of the 3D space
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-03-01 | DOI: 10.1016/j.gmod.2021.101100
Carlile Lavor , Michael Souza , José Luis Aragón

Motivated by questions on the orthogonality of isometries, we present a new construction of the conformal model of the 3D space using just elementary linear algebra. In addition to pictures that help readers understand the conformal model, our approach allows us to obtain matrix representations of isometries that can be useful, for example, in applications of computational geometry, including computer graphics, robotics, and molecular geometry.
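For reference, the standard conformal (CGA) embedding that such a model reproduces can be written as follows; the paper derives an equivalent construction with elementary linear algebra and explicit matrix representations rather than this algebraic notation.

```latex
% Standard conformal-model embedding of a point x in R^3 into R^{4,1}, with null basis
% vectors e_0, e_infty satisfying e_0^2 = e_infty^2 = 0 and e_0 . e_infty = -1.
P(\mathbf{x}) = e_0 + \mathbf{x} + \tfrac{1}{2}\,\lVert \mathbf{x} \rVert^2 e_\infty,
\qquad P(\mathbf{x})^2 = 0.

% A translation by a vector t then acts as a versor sandwich:
T_{\mathbf{t}} = 1 - \tfrac{1}{2}\,\mathbf{t}\, e_\infty,
\qquad
T_{\mathbf{t}}\, P(\mathbf{x})\, \widetilde{T}_{\mathbf{t}} = P(\mathbf{x} + \mathbf{t}).
```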

{"title":"Orthogonality of isometries in the conformal model of the 3D space","authors":"Carlile Lavor ,&nbsp;Michael Souza ,&nbsp;José Luis Aragón","doi":"10.1016/j.gmod.2021.101100","DOIUrl":"10.1016/j.gmod.2021.101100","url":null,"abstract":"<div><p>Motivated by questions on orthogonality<span> of isometries, we present a new construction of the conformal model of the 3D space using just elementary linear algebra. In addition to pictures that can help the readers to understand the conformal model, our approach allows to obtain matrix representation<span> of isometries that can be useful, for example, in applications of computational geometry<span>, including computer graphics, robotics, and molecular geometry.</span></span></span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"114 ","pages":"Article 101100"},"PeriodicalIF":1.7,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101100","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85976487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Hybrid function representation for heterogeneous objects
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-03-01 | DOI: 10.1016/j.gmod.2021.101098
A. Tereshin , A. Pasko , O. Fryazinov , V. Adzhiev

Heterogeneous object modelling is an emerging area where geometric shapes are considered in concert with their internal physically-based attributes. This paper describes a theoretical and practical framework for modelling volumetric heterogeneous objects on the basis of a novel unifying functionally-based hybrid representation called HFRep. This new representation allows a continuous smooth distance field to be obtained in Euclidean space and preserves the advantages of the conventional representations based on scalar fields of different kinds without their drawbacks. We systematically describe the mathematical and algorithmic basics of HFRep. The steps of the basic algorithm are presented in detail for both geometry and attributes. To address several problematic issues, we suggest practical solutions, including a new algorithm for solving the eikonal equation on hierarchical grids. Finally, we show the practicality of the approach by modelling several representative heterogeneous objects, including those of a time-variant nature.
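One building block mentioned above is an eikonal solver for producing smooth distance fields. The sketch below is a generic single-resolution fast-sweeping solver for |grad u| = 1 on a 2D grid; it is not the paper's hierarchical-grid algorithm, and the function name and boundary handling are assumptions.

```python
import numpy as np

def eikonal_sweep(seed_mask, h=1.0, sweeps=4):
    """Unsigned distance field by Gauss-Seidel fast sweeping for |grad u| = 1.

    seed_mask: boolean array, True where the distance is zero (the interface).
    Repeated sweeps in the four axis orderings propagate distances outward.
    """
    u = np.where(seed_mask, 0.0, np.inf)
    ny, nx = u.shape
    orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)), (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(sweeps):
        for ys, xs in orders:
            for i in ys:
                for j in xs:
                    if seed_mask[i, j]:
                        continue
                    a = min(u[i - 1, j] if i > 0 else np.inf, u[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(u[i, j - 1] if j > 0 else np.inf, u[i, j + 1] if j < nx - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue                      # no information has reached this cell yet
                    if abs(a - b) >= h:               # causal update uses only the closer neighbor
                        cand = min(a, b) + h
                    else:                             # both neighbors participate in the update
                        cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u
```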

{"title":"Hybrid function representation for heterogeneous objects","authors":"A. Tereshin ,&nbsp;A. Pasko ,&nbsp;O. Fryazinov ,&nbsp;V. Adzhiev","doi":"10.1016/j.gmod.2021.101098","DOIUrl":"10.1016/j.gmod.2021.101098","url":null,"abstract":"<div><p><span><span>Heterogeneous object modelling is an emerging area where geometric shapes are considered in concert with their internal physically-based attributes. This paper describes a novel theoretical and practical framework for modelling volumetric heterogeneous objects on the basis of a novel unifying functionally-based hybrid representation called HFRep. This new representation allows for obtaining a continuous smooth distance field in </span>Euclidean space and preserves the advantages of the conventional representations based on </span>scalar fields<span> of different kinds without their drawbacks. We systematically describe the mathematical and algorithmic basics of HFRep. The steps of the basic algorithm are presented in detail for both geometry and attributes. To solve some problematic issues, we have suggested several practical solutions, including a new algorithm for solving the eikonal equation on hierarchical grids. Finally, we show the practicality of the approach by modelling several representative heterogeneous objects, including those of a time-variant nature.</span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"114 ","pages":"Article 101098"},"PeriodicalIF":1.7,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101098","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77256678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Normal manipulation for bas-relief modeling
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-03-01 | DOI: 10.1016/j.gmod.2021.101099
Zhongping Ji , Xianfang Sun , Yu-Wei Zhang , Weiyin Ma , Mingqiang Wei

We introduce a normal-based modeling framework for bas-relief generation and stylization, motivated by recent advances in this topic. Creating bas-reliefs from normal images has successfully facilitated bas-relief modeling in image space. However, the use of normal images in previous work is restricted to cut-and-paste or layer-blending operations, which simply treat a normal vector as a pixel of a general color image. This paper extends normal-based methods by processing the normal image from a geometric perspective. Our method can not only generate a new normal image by combining various frequencies of existing normal images and transferring details, but also build bas-reliefs from a single RGB image and its edge-based sketch lines. In addition, we introduce an auxiliary function to represent a smooth base surface or to generate a layered global shape. To integrate the above considerations into our framework, we formulate bas-relief generation as a variational problem that can be solved with a screened Poisson equation. One important advantage of our method is that it can generate more styles than previous methods, thereby expanding the bas-relief shape space. We tested our method on a range of normal images, and it compares favorably with other popular classic and state-of-the-art methods.
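To make the variational formulation concrete, here is a hedged sketch that recovers a height field from a normal image via a screened Poisson solve with SciPy. The gradient conversion g = (-n_x/n_z, -n_y/n_z), the reflect boundary handling, and the conjugate-gradient settings are generic choices, not the paper's exact discretization.

```python
import numpy as np
from scipy.ndimage import laplace
from scipy.sparse.linalg import LinearOperator, cg

def heights_from_normals(normals, base=None, lam=0.1):
    """Recover a height field h from an (H, W, 3) normal image.

    Minimizes |grad h - g|^2 + lam * |h - base|^2, whose Euler-Lagrange equation is
    lam * h - laplacian(h) = lam * base - div(g).
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    nz = np.clip(nz, 1e-3, None)                 # avoid division by grazing normals
    gx, gy = -nx / nz, -ny / nz
    if base is None:
        base = np.zeros_like(gx)

    div_g = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    rhs = (lam * base - div_g).ravel()
    shape, n = gx.shape, gx.size

    def matvec(v):
        h = v.reshape(shape)
        return (lam * h - laplace(h, mode="reflect")).ravel()

    h, _ = cg(LinearOperator((n, n), matvec=matvec), rhs, maxiter=500)
    return h.reshape(shape)
```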

{"title":"Normal manipulation for bas-relief modeling","authors":"Zhongping Ji ,&nbsp;Xianfang Sun ,&nbsp;Yu-Wei Zhang ,&nbsp;Weiyin Ma ,&nbsp;Mingqiang Wei","doi":"10.1016/j.gmod.2021.101099","DOIUrl":"10.1016/j.gmod.2021.101099","url":null,"abstract":"<div><p><span><span>We introduce a normal-based modeling framework for bas-relief generation and stylization which is motivated by the recent advancement in this topic. Creating bas-relief from normal images has successfully facilitated bas-relief modeling in image space. However, the use of normal images in previous work is restricted to the cut-and-paste or blending operations of layers. These operations simply treat a normal vector as a pixel of a general color image. This paper is intended to extend normal-based methods by processing the normal image from a geometric perspective. Our method can not only generate a new normal image by combining various frequencies of existing normal images and details transferring, but also build bas-reliefs from a single </span>RGB<span> image and its edge-based sketch lines. In addition, we introduce an auxiliary function to represent a smooth base surface or generate a layered global shape. To integrate above considerations into our framework, we formulate the bas-relief generation as a </span></span>variational problem<span> which can be solved by a screened Poisson equation. One important advantage of our method is that it can generate more styles than previous methods and thus it expands the bas-relief shape space. We experimented our method on a range of normal images and it compares favorably to other popular classic and state-of-the-art methods.</span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"114 ","pages":"Article 101099"},"PeriodicalIF":1.7,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101099","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77959613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bas-relief generation from point clouds based on normal space compression with real-time adjustment on CPU
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-01-01 | DOI: 10.1016/j.gmod.2021.101096
Jianhui Nie , Wenkai Shi , Ye Liu , Hao Gao , Feng Xu , Zhaochen Zhang

This paper presents an algorithm that generates bas-reliefs directly from scattered point clouds. Compared with the popular gradient-domain methods for mesh surfaces, this algorithm takes normal vectors as the operation object, making it independent of topological connectivity, and thus more suitable for point clouds and easier to implement. By constructing linear equations for the bas-relief height and using a subspace solution strategy, the algorithm can adjust the bas-relief effect in real time, relying only on the computing power of a consumer CPU. In addition, we propose an iterative solution to generate a bas-relief model of a specified height. The experimental results indicate that our algorithm provides a unified solution for generating different types of bas-reliefs with good saturation and rich details.

{"title":"Bas-relief generation from point clouds based on normal space compression with real-time adjustment on CPU","authors":"Jianhui Nie ,&nbsp;Wenkai Shi ,&nbsp;Ye Liu ,&nbsp;Hao Gao ,&nbsp;Feng Xu ,&nbsp;Zhaochen Zhang","doi":"10.1016/j.gmod.2021.101096","DOIUrl":"https://doi.org/10.1016/j.gmod.2021.101096","url":null,"abstract":"<div><p><span>This paper presents a bas-relief generation algorithm from scattered point cloud directly. Compared with the popular gradient domain methods for mesh surface, this algorithm takes normal vectors as the operation object, making it independent of topology connection, thus more suitable for point clouds and easier to implement. By constructing </span>linear equations of the bas-relief height and using the solution strategy of the subspace, this algorithm can adjustment the bas-relief effect in real-time relying on the computing power of a consumer CPU only. In addition, we also propose an iterative solution to generate a bas-relief model of a specified height. The experimental results indicate that our algorithm provides a unified solution for generating different types of bas-relief with good saturation and rich details.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"113 ","pages":"Article 101096"},"PeriodicalIF":1.7,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101096","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91721705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Surface-based computation of the Euler characteristic in the cubical grid
IF 1.7 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2020-11-01 | DOI: 10.1016/j.gmod.2020.101093
Lidija Čomić , Paola Magillo

For well-composed (manifold) objects in the 3D cubical grid, the Euler characteristic is equal to half of the Euler characteristic of the object boundary, which in turn is equal to the number of boundary vertices minus the number of boundary faces. We extend this formula to arbitrary objects, not necessarily well-composed, by adjusting the count of boundary cells both for vertex-adjacency and for face-adjacency. We prove the correctness of our approach by constructing two well-composed polyhedral complexes, homotopy equivalent to the given object under the two adjacencies. The proposed formulas for the computation of the Euler characteristic are simple, easy to implement, and efficient. Experiments show that our formulas are faster to evaluate than volume-based ones on realistic inputs, and faster than the classical surface-based formulas.
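The classical relation quoted in the first sentence is easy to implement directly: count boundary faces and boundary vertices of a voxel object and return (V - F)/2. The sketch below covers only the well-composed case and does not reproduce the paper's adjusted counts for vertex- and face-adjacency.

```python
import numpy as np

def euler_characteristic_well_composed(voxels):
    """Euler characteristic of a well-composed voxel object via its boundary surface.

    chi(object) = chi(boundary)/2 = (#boundary vertices - #boundary faces)/2.
    """
    vox = np.pad(np.asarray(voxels, dtype=bool), 1)   # empty border simplifies neighbor tests
    boundary_faces = 0
    boundary_vertices = set()
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for x, y, z in zip(*np.nonzero(vox)):
        for dx, dy, dz in offsets:
            if vox[x + dx, y + dy, z + dz]:
                continue                              # occupied neighbor: not a boundary face
            boundary_faces += 1
            # Collect the four grid vertices of this exposed face.
            for u in (0, 1):
                for v in (0, 1):
                    if dx:
                        corner = (x + (dx + 1) // 2, y + u, z + v)
                    elif dy:
                        corner = (x + u, y + (dy + 1) // 2, z + v)
                    else:
                        corner = (x + u, y + v, z + (dz + 1) // 2)
                    boundary_vertices.add(corner)
    return (len(boundary_vertices) - boundary_faces) // 2

print(euler_characteristic_well_composed(np.ones((1, 1, 1))))   # single cube: 8 vertices, 6 faces -> 1
```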

{"title":"Surface-based computation of the Euler characteristic in the cubical grid","authors":"Lidija Čomić ,&nbsp;Paola Magillo","doi":"10.1016/j.gmod.2020.101093","DOIUrl":"10.1016/j.gmod.2020.101093","url":null,"abstract":"<div><p><span>For well-composed (manifold) objects in the 3D cubical grid, the Euler characteristic<span> is equal to half of the Euler characteristic of the object boundary, which in turn is equal to the number of boundary vertices minus the number of boundary faces. We extend this formula to arbitrary objects, not necessarily well-composed, by adjusting the count of boundary cells both for vertex- and for face-adjacency. We prove the correctness of our approach by constructing two well-composed polyhedral complexes </span></span>homotopy equivalent to the given object with the two adjacencies. The proposed formulas for the computation of the Euler characteristic are simple, easy to implement and efficient. Experiments show that our formulas are faster to evaluate than the volume-based ones on realistic inputs, and are faster than the classical surface-based formulas.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"112 ","pages":"Article 101093"},"PeriodicalIF":1.7,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2020.101093","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86045947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3