Untangling all-hex meshes via adaptive boundary optimization
Pub Date: 2022-05-01, DOI: 10.1016/j.gmod.2022.101136 (Graphical Models, Volume 121, Article 101136)
Qing Huang, Wen-Xiang Zhang, Qi Wang, Ligang Liu, Xiao-Ming Fu
We propose a novel method to untangle and optimize all-hex meshes. Central to this algorithm is an adaptive boundary optimization process that significantly improves practical robustness. Given an all-hex mesh with many inverted hexahedral elements, we first optimize a high-quality quad boundary mesh with a small approximation error to the input boundary. Since boundary constraints limit the space in which inversion-free meshes can be sought, we then relax them to generate an inversion-free all-hex mesh. We develop an adaptive boundary relaxation algorithm that implicitly restricts the shape difference between the relaxed and input boundaries, thereby facilitating the next step. Finally, an adaptive boundary difference minimization effectively and efficiently drives the distance between the relaxed boundary and the optimized boundary from the first step to zero while avoiding inverted elements. We demonstrate the efficacy of our algorithm on a data set containing 1004 all-hex meshes. Compared to previous methods, our method achieves higher practical robustness.
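The notion of an "inverted" hexahedral element can be made concrete with a corner-Jacobian test: an element is commonly flagged as inverted when the determinant of the edge-vector frame at any of its eight corners is non-positive. Below is a minimal sketch of that test in Python/NumPy; the corner-to-neighbor indexing follows one common vertex numbering convention and is an assumption for illustration, not code from the paper.

```python
import numpy as np

# For each corner of a hexahedron, the indices of its three edge-adjacent
# vertices, assuming the usual 0-7 numbering (bottom face 0-1-2-3 CCW,
# top face 4-5-6-7 above it). This convention is an assumption.
CORNER_NEIGHBORS = [
    (1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
    (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3),
]

def is_inverted(hex_verts):
    """hex_verts: (8, 3) array of corner positions.
    Returns True if any corner Jacobian determinant is non-positive."""
    for c, (i, j, k) in enumerate(CORNER_NEIGHBORS):
        e1 = hex_verts[i] - hex_verts[c]
        e2 = hex_verts[j] - hex_verts[c]
        e3 = hex_verts[k] - hex_verts[c]
        if np.linalg.det(np.column_stack([e1, e2, e3])) <= 0.0:
            return True
    return False

# A unit cube is well-shaped ...
cube = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                 [0,0,1],[1,0,1],[1,1,1],[0,1,1]], dtype=float)
print(is_inverted(cube))          # False
# ... but pushing the top face below the bottom one inverts it.
bad = cube.copy(); bad[4:, 2] = -1.0
print(is_inverted(bad))           # True
```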
{"title":"Untangling all-hex meshes via adaptive boundary optimization","authors":"Qing Huang, Wen-Xiang Zhang, Qi Wang, Ligang Liu, Xiao-Ming Fu","doi":"10.1016/j.gmod.2022.101136","DOIUrl":"https://doi.org/10.1016/j.gmod.2022.101136","url":null,"abstract":"<div><p>We propose a novel method to untangle and optimize all-hex meshes. Central to this algorithm is an adaptive boundary optimization process that significantly improves practical robustness. Given an all-hex mesh with many inverted hexahedral elements, we first optimize a high-quality quad boundary mesh with a small approximation<span> error to the input boundary. Since the boundary constraints limit the optimization space to search for the inversion-free meshes, we then relax the boundary constraints to generate an inversion-free all-hex mesh. We develop an adaptive boundary relaxation algorithm to implicitly restrict the shape difference between the relaxed and input boundaries, thereby facilitating the next step. Finally, an adaptive boundary difference minimization is developed to effectively and efficiently force the distance difference between the relaxed boundary and the optimized boundary of the first step to approach zero while avoiding inverted elements. We demonstrate the efficacy of our algorithm on a data set containing 1004 all-hex meshes. Compared to previous methods, our method achieves higher practical robustness.</span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"121 ","pages":"Article 101136"},"PeriodicalIF":1.7,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72219342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TAD-Net: tooth axis detection network based on rotation transformation encoding
Pub Date: 2022-05-01, DOI: 10.1016/j.gmod.2022.101138 (Graphical Models, Volume 121, Article 101138)
Yeying Fan, Qian Ma, Guangshun Wei, Zhiming Cui, Yuanfeng Zhou, Wenping Wang
Tooth axes, defined on 3D tooth models, play a key role in digital orthodontics, where they are commonly used as an important reference in automatic tooth arrangement and anomaly detection. In this paper, we propose an automatic deep learning network (TAD-Net) for tooth axis detection based on rotation transformation encoding. Using quaternion transformations, we convert the geometric rotation of the tooth axes into a feature encoding of the point cloud of the 3D tooth models. Furthermore, a feature confidence-aware attention mechanism is adopted to generate dynamic weights for the features of each point, improving the network's accuracy. Experimental results show that the proposed method achieves higher detection accuracy on the constructed dental data set than existing networks.
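At the heart of the method is the idea of representing a tooth axis not as a raw direction but as the rotation carrying a canonical axis onto it, encoded as a quaternion. A minimal sketch of such an encoding is below; treating the z-axis as the canonical direction and using the resulting quaternion as a regression target are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def quat_from_two_vectors(a, b):
    """Unit quaternion (w, x, y, z) rotating unit vector a onto unit vector b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    w = 1.0 + np.dot(a, b)
    if w < 1e-8:                       # a and b are opposite: pick any orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        q = np.array([0.0, *axis])
    else:
        q = np.array([w, *np.cross(a, b)])
    return q / np.linalg.norm(q)

# Example: encode the rotation taking the canonical z-axis onto a tooth axis.
canonical = np.array([0.0, 0.0, 1.0])
tooth_axis = np.array([0.2, -0.1, 0.97])
print(quat_from_two_vectors(canonical, tooth_axis))
```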
{"title":"TAD-Net: tooth axis detection network based on rotation transformation encoding","authors":"Yeying Fan , Qian Ma , Guangshun Wei , Zhiming Cui , Yuanfeng Zhou , Wenping Wang","doi":"10.1016/j.gmod.2022.101138","DOIUrl":"10.1016/j.gmod.2022.101138","url":null,"abstract":"<div><p>The tooth axes, defined on 3D tooth model, play a key role in digital orthodontics, which is usually used as an important reference in automatic tooth arrangement and anomaly detection<span>. In this paper, we propose an automatic deep learning network (TAD-Net) of tooth axis detection based on rotation transformation encoding. By utilizing quaternion transformation, we convert the geometric rotation transformation of the tooth axes into the feature encoding of the point cloud of 3D tooth models. Furthermore, the feature confidence-aware attention mechanism is adopted to generate dynamic weights for the features of each point to improve the network learning accuracy. Experimental results show that the proposed method has achieved higher detection accuracy on the constructed dental data set compared with the existing networks.</span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"121 ","pages":"Article 101138"},"PeriodicalIF":1.7,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83982159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D Printed hair modeling from strand-level hairstyles
Pub Date: 2022-05-01, DOI: 10.1016/j.gmod.2022.101135 (Graphical Models, Volume 121, Article 101135)
Han Chen, Minghai Chen, Lin Lu
Recent advances in the design and fabrication of personalized figurines, together with 3D printing techniques, have made the creation of high-quality figurines possible for ordinary users. Hair plays an important role in the realism of such figurines. Existing hair reconstruction methods either demand expensive acquisition equipment or approximate the result very coarsely. Instead of creating hair for figurines with scanning devices, we present a novel surface reconstruction method that generates a 3D printable hair model with geometric features from a strand-level hairstyle, thus converting existing digital hair databases into 3D printable ones. Given a strand-level hair model, we filter the strands via bundle clustering, retain the main features, and reconstruct the hair surface in two stages. First, our algorithm extracts the hair contour surface according to the structure of the strands and calculates a normal for each vertex. Next, a closed, manifold triangle mesh with geometric details and an embedded direction field is obtained with Poisson surface reconstruction. We obtain closed-manifold hairstyles without user interaction, benefiting personalized figurine fabrication. We verify the feasibility of our method on a wide range of examples.
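The second stage, turning oriented contour points into a watertight mesh, corresponds to standard Poisson surface reconstruction. A minimal sketch using the Open3D library follows; the input file name and the octree depth are placeholder assumptions, and the paper's own implementation may differ.

```python
import open3d as o3d

# Load oriented points sampled from the hair contour surface
# ("hair_contour.ply" is a placeholder file name).
pcd = o3d.io.read_point_cloud("hair_contour.ply")
if not pcd.has_normals():
    pcd.estimate_normals()
    pcd.orient_normals_consistent_tangent_plane(k=30)

# Poisson reconstruction yields a closed, manifold triangle mesh;
# depth controls the octree resolution (8-10 is a typical range).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("hair_printable.ply", mesh)
```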
{"title":"3D Printed hair modeling from strand-level hairstyles","authors":"Han Chen, Minghai Chen, Lin Lu","doi":"10.1016/j.gmod.2022.101135","DOIUrl":"https://doi.org/10.1016/j.gmod.2022.101135","url":null,"abstract":"<div><p><span><span>Recent advances in the design and fabrication of personalized figurines have made the creation of high-quality figurines possible for ordinary users with the facilities of 3D printing<span> techniques. The hair plays an important role in gaining the realism of the figurines. Existing hair reconstruction methods suffer from the high demand for acquisition equipment, or the result is approximated very coarsely. Instead of creating hairs for figurines by scanning devices, we present a novel surface reconstruction method to generate a 3D printable hair model with geometric features from a strand-level hairstyle, thus converting the exiting digital hair database to a 3D printable database. Given a strand-level hair model, we filter the strands via bundle clustering, retain the main features, and reconstruct hair strands in two stages. First, our algorithm is the key to extracting the hair contour surface according to the structure of strands and calculating the normal for each vertex. Next, a close, manifold triangle mesh with </span></span>geometric details and an embedded </span>direction field is achieved with the Poisson surface reconstruction. We obtain closed-manifold hairstyles without user interactions, benefiting personalized figurine fabrication. We verify the feasibility of our method by exhibiting a wide range of examples.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"121 ","pages":"Article 101135"},"PeriodicalIF":1.7,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72219324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Point cloud denoising review: from classical to deep learning-based approaches
Pub Date: 2022-05-01, DOI: 10.1016/j.gmod.2022.101140 (Graphical Models, Volume 121, Article 101140)
Lang Zhou, Guoxing Sun, Yong Li, Weiqing Li, Zhiyong Su
Over the past decade, an enormous amount of research effort has been dedicated to the design of point cloud denoising techniques. In this article, we first provide a comprehensive survey of state-of-the-art denoising solutions, which mainly fall into three classes: filter-based, optimization-based, and deep learning-based techniques. Methods of each class are analyzed and discussed in detail, using a benchmark of different denoising models that takes into account different aspects of the denoising challenge. We also review two kinds of quality assessment methods designed for evaluating denoising quality. A comprehensive comparison covers several popular or state-of-the-art methods, together with insightful observations. Finally, we discuss open challenges and future research directions in identifying new point cloud denoising strategies.
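As a concrete reference point for the filter-based class, the sketch below implements one of the simplest members of that family, a k-nearest-neighbor Gaussian-weighted position filter, in Python with NumPy/SciPy. It is meant only to illustrate the flavor of classical filtering, not any specific method from the survey.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_gaussian_filter(points, k=16, sigma=None):
    """Denoise a point cloud by moving each point toward a Gaussian-weighted
    average of its k nearest neighbors. points: (N, 3) array."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)   # first neighbor is the point itself
    if sigma is None:
        sigma = np.mean(dists[:, 1:])          # bandwidth from average point spacing
    w = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('nk,nkd->nd', w, points[idx])

# Noisy samples of a unit sphere: the mean deviation from radius 1 shrinks.
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
noisy = pts + 0.02 * rng.normal(size=pts.shape)
denoised = knn_gaussian_filter(noisy)
print(np.abs(np.linalg.norm(noisy, axis=1) - 1).mean(),
      np.abs(np.linalg.norm(denoised, axis=1) - 1).mean())
```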
{"title":"Point cloud denoising review: from classical to deep learning-based approaches","authors":"Lang Zhou , Guoxing Sun , Yong Li , Weiqing Li , Zhiyong Su","doi":"10.1016/j.gmod.2022.101140","DOIUrl":"https://doi.org/10.1016/j.gmod.2022.101140","url":null,"abstract":"<div><p>Over the past decade, we have witnessed an enormous amount of research effort dedicated to the design of point cloud denoising techniques. In this article, we first provide a comprehensive survey on state-of-the-art denoising solutions, which are mainly categorized into three classes: filter-based, optimization-based, and deep learning-based techniques. Methods of each class are analyzed and discussed in detail. This is done using a benchmark on different denoising models, taking into account different aspects of denoising challenges. We also review two kinds of quality assessment methods designed for evaluating denoising quality. A comprehensive comparison is performed to cover several popular or state-of-the-art methods, together with insightful observations. Finally, we discuss open challenges and future research directions in identifying new point cloud denoising strategies.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"121 ","pages":"Article 101140"},"PeriodicalIF":1.7,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72219325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jacobi–PIA algorithm for bi-cubic B-spline interpolation surfaces
Pub Date: 2022-03-01, DOI: 10.1016/j.gmod.2022.101134 (Graphical Models, Volume 120, Article 101134)
Chengzhi Liu, Juncheng Li, Lijuan Hu
Based on the Jacobi splitting of collocation matrices, we exploit in this paper the Jacobi–PIA format for bi-cubic B-spline surfaces. We first present the Jacobi–PIA scheme in terms of matrix products, which is computationally more efficient than the matrix-vector product form. To analyze the convergence of Jacobi–PIA, we transform the matrix product iterative scheme into the equivalent matrix-vector product scheme using the properties of the Kronecker product. We show that, with the optimal relaxation factor, the Jacobi–PIA format for bi-cubic B-spline surfaces converges to the interpolation surface. Numerical results also demonstrate the effectiveness of the proposed method.
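To make the scheme concrete, here is a small NumPy sketch of a relaxed Jacobi–PIA iteration for a tensor-product cubic B-spline interpolation problem: the Jacobi (diagonal) splitting of each collocation matrix turns the update into two cheap matrix products per step. The collocation matrix construction, the test surface, and the numerically computed relaxation factor are illustrative assumptions; the paper derives the optimal factor analytically.

```python
import numpy as np

def cubic_bspline_collocation(n):
    """Clamped cubic B-spline collocation matrix at the knots:
    interior rows are [1/6, 2/3, 1/6], endpoint rows interpolate directly."""
    B = np.eye(n)
    for i in range(1, n - 1):
        B[i, i - 1:i + 2] = [1/6, 2/3, 1/6]
    return B

n = m = 20
Bu, Bv = cubic_bspline_collocation(n), cubic_bspline_collocation(m)
Du_inv = np.diag(1.0 / np.diag(Bu))
Dv_inv = np.diag(1.0 / np.diag(Bv))

# Data to interpolate: a height field sampled on a grid (placeholder surface).
u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, m), indexing='ij')
Q = np.sin(2 * np.pi * u) * np.cos(2 * np.pi * v)

# Relaxation factor 2 / (lambda_min + lambda_max) of the iteration operator,
# whose eigenvalues are products of those of Du^{-1}Bu and Dv^{-1}Bv
# (computed numerically here; the paper gives a closed form).
mu = np.sort(np.linalg.eigvals(Du_inv @ Bu).real)
omega = 2.0 / (mu[0] ** 2 + mu[-1] ** 2)

P = Q.copy()                          # PIA starts from the data points
for _ in range(200):
    R = Q - Bu @ P @ Bv.T             # residual at the interpolation parameters
    P += omega * Du_inv @ R @ Dv_inv  # relaxed Jacobi update, matrix-product form
print(np.abs(Q - Bu @ P @ Bv.T).max())   # residual near machine precision
```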
{"title":"Jacobi–PIA algorithm for bi-cubic B-spline interpolation surfaces","authors":"Chengzhi Liu, Juncheng Li, Lijuan Hu","doi":"10.1016/j.gmod.2022.101134","DOIUrl":"10.1016/j.gmod.2022.101134","url":null,"abstract":"<div><p><span>Based on the Jacobi splitting of collocation matrices, we in this paper exploited the Jacobi–PIA format for bi-cubic B-spline surfaces. We first present the Jacobi–PIA scheme in term of matrix product<span>, which has higher computational efficiency than that in term of matrix-vector product. To analyze the convergence of Jacobi–PIA, we transform the matrix product iterative scheme into the equivalent matrix-vector product scheme by using the properties of the </span></span>Kronecker product. We showed that with the optimal relaxation factor, the Jacobi–PIA format for bi-cubic B-spline surface converges to the interpolation surface. Numerical results also demonstrated the effectiveness of the proposed method.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"120 ","pages":"Article 101134"},"PeriodicalIF":1.7,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85559981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
4 and 5-Axis additive manufacturing of parts represented using free-form 3D curves
Pub Date: 2022-03-01, DOI: 10.1016/j.gmod.2022.101137 (Graphical Models, Volume 120, Article 101137)
Erkan Gunpinar, Serhat Cam
Layer-by-layer slicing is the most common approach in additive manufacturing. Recent works use curved layers (rather than planar ones) on which print-paths are located, and outline their advantages over planar slicing. In this paper, free-form three-dimensional curves are used as input for print-path generation; these curves cover the model to be printed and need not lie on either a planar or a curved layer. Such print-paths have recently been studied for 3-axis additive manufacturing, and this paper proposes a novel additive manufacturing process for models represented by such curves on 4- and 5-axis machines. The input curves are first subdivided into short sub-curves (i.e., segments), which are then merged to obtain print-paths with collision-free printing-head orientations along them. Thanks to the two additional rotational axes of the printing-head, fewer print-paths can potentially be obtained, which reduces subdivisions of the input curves and is therefore desirable for improved mechanical properties in the printed parts. As a proof of concept, the print-paths and the printing-head orientations along them are validated using an AM simulator and a machine.
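The first step, chopping each free-form input curve into short segments before merging, can be illustrated by a simple arc-length subdivision of a polyline; the maximum segment length threshold below is a placeholder parameter, not a value from the paper.

```python
import numpy as np

def subdivide_by_arclength(curve, max_len):
    """Split a 3D polyline (N, 3) into consecutive sub-curves whose
    arc length does not exceed max_len."""
    segs, start, acc = [], 0, 0.0
    for i in range(1, len(curve)):
        acc += np.linalg.norm(curve[i] - curve[i - 1])
        if acc > max_len:
            segs.append(curve[start:i + 1])
            start, acc = i, 0.0
    if start < len(curve) - 1:
        segs.append(curve[start:])
    return segs

# Example: a helix-like free-form curve split into segments of at most 5 units.
t = np.linspace(0, 4 * np.pi, 400)
curve = np.column_stack([10 * np.cos(t), 10 * np.sin(t), t])
segments = subdivide_by_arclength(curve, max_len=5.0)
print(len(segments), [len(s) for s in segments[:3]])
```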
{"title":"4 and 5-Axis additive manufacturing of parts represented using free-form 3D curves","authors":"Erkan Gunpinar, Serhat Cam","doi":"10.1016/j.gmod.2022.101137","DOIUrl":"10.1016/j.gmod.2022.101137","url":null,"abstract":"<div><p>Layer-by-layer additive manufacturing is commonly utilized for additive manufacturing. Recent works utilize curved layers (rather than planar ones), on which print-paths are located, and outline their advantage over planar slicing. In this paper, free-form three-dimensional curves are utilized as input for the generation of print-paths, which covers the model to be printed and do not necessarily lie on either a planar or a curved layer. Such print-paths have been recently studied for 3-axis additive manufacturing, and a novel additive manufacturing process for the models represented using such curves are proposed for 4 and 5-axis additive manufacturing in this paper. The input curves are first subdivided into short sub-curves (i.e., segments), which are then merged to obtain print-paths with (collision-free) printing-head orientations along them. Thanks to additional two rotational axes of the printing-head, a less number of print-paths can potentially be obtained, which can reduce subdivisions in the input curves, and therefore, is desirable in additive manufacturing for improved mechanical properties in the printed parts. As a proof of concept, the print-paths with printing-head orientations along them are finally validated using an AM simulator and machine.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"120 ","pages":"Article 101137"},"PeriodicalIF":1.7,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75098090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Approach to Preprocess and Cluster a BRDF Database
Pub Date: 2022-01-01, DOI: 10.1016/j.gmod.2021.101123 (Graphical Models, Volume 119, Article 101123)
Mislene da Silva Nunes, Methanias Colaço Júnior, Gastão Florêncio Miranda Jr., Beatriz Trinchão Andrade
Context
The Bidirectional Reflectance Distribution Function (BRDF) represents a material by describing how it reflects incoming light at its surface. In this context, material clustering contributes to selecting a basis of representative BRDFs, reconstructing BRDFs, personalizing the appearance of materials, and estimating material properties from images.
Objective
This work presents an approach to cluster a BRDF database according to its reflectance features.
Method
We first preprocess a BRDF database by mapping it to an image slice database, then find the best parameters for the LLE method through empirical analysis, retrieving lower-dimensional databases. We then perform a controlled experiment using the k-means, k-medoids, and spectral clustering algorithms applied to the low-dimensional databases (a code sketch of this pipeline follows the abstract).
Conclusion
K-means presented the best overall result compared to the other clustering algorithms. For applications that require cluster representatives drawn from the database, we suggest using k-medoids, whose results were close to those of k-means.
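The Method paragraph above maps directly onto standard scikit-learn components; a minimal sketch of that pipeline is below. The synthetic feature matrix, neighbor count, and cluster count are placeholder assumptions standing in for the paper's empirically tuned values.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder for the preprocessed database: one flattened image slice per BRDF.
rng = np.random.default_rng(0)
X = rng.random((100, 4096))          # 100 materials x 4096 pixels

# Dimensionality reduction with LLE (n_neighbors / n_components would be
# chosen through empirical analysis, as in the paper).
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=5, random_state=0)
X_low = lle.fit_transform(X)

# Cluster the low-dimensional embedding and score the partition.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_low)
print(silhouette_score(X_low, km.labels_))
```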
{"title":"An Approach to Preprocess and Cluster a BRDF Database","authors":"Mislene da Silva Nunes , Methanias Colaço Júnior , Gastão Florêncio Miranda Jr. , Beatriz Trinchão Andrade","doi":"10.1016/j.gmod.2021.101123","DOIUrl":"https://doi.org/10.1016/j.gmod.2021.101123","url":null,"abstract":"<div><h3>Context</h3><p>The Bidirectional Reflectance Distribution Function (BRDF) represents a material through the incoming light on its surface. In this context, material clustering contributes to selecting a basis of representative BRDFs, the reconstruction of BRDFs, the personalization of the appearance of materials, and image-based estimation of material properties.</p></div><div><h3>Objective</h3><p>This work presents an approach to cluster a BRDF database according to its reflectance features.</p></div><div><h3>Method</h3><p>We first preprocess a BRDF database by mapping it to an image slice database and then find the best parameters for the LLE method through an empirical analysis, retrieving lower-dimensional databases. We performed a controlled experiment using the k-means, k-medoids, and spectral clustering algorithms applied to the low-dimensional databases.</p></div><div><h3>Conclusion</h3><p>K-means presented the best overall result compared to the other clustering algorithms. For applications that require cluster representatives from the database, we suggest using k-medoids, which presented results close to those of the k-means.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"119 ","pages":"Article 101123"},"PeriodicalIF":1.7,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91764504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-scale and multi-level shape descriptor learning via a hybrid fusion network
Pub Date: 2022-01-01, DOI: 10.1016/j.gmod.2021.101121 (Graphical Models, Volume 119, Article 101121)
Xinwei Huang, Nannan Li, Qing Xia, Shuai Li, Aimin Hao, Hong Qin
Discriminative and informative 3D shape descriptors are of fundamental significance to computer graphics applications, especially in the fields of geometry modeling and shape analysis. 3D shape descriptors, which reveal extrinsic/intrinsic properties of 3D shapes, have been studied for decades and have proved useful and effective in various analysis and synthesis tasks. Nonetheless, existing descriptors are mainly founded upon certain local differential attributes or global shape spectra, or certain combinations of both. Conventional descriptors are typically customized for specific tasks with a priori domain knowledge, which severely limits their widespread use. Recently, neural networks, benefiting from their powerful data-driven capability for general feature extraction from raw data without any domain knowledge, have achieved great success in many areas, including shape analysis. In this paper, we present a novel hybrid fusion network (HFN) that learns multi-scale and multi-level shape representations by uniformly integrating a traditional region-based descriptor with modern neural networks. On one hand, we exploit spectral graph wavelets (SGWs) to extract the shapes' local-to-global features. On the other hand, the shapes are fed into a convolutional neural network to generate multi-level features simultaneously. A hierarchical fusion network then learns a general and unified representation from these two different types of features, capturing the multi-scale and multi-level properties of the underlying shapes. Extensive experiments and comprehensive comparisons demonstrate that our HFN achieves better performance in common shape analysis tasks, such as shape retrieval and recognition, and that the learned hybrid descriptor is robust, informative, and discriminative, with strong potential for widespread application.
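The "local-to-global" behavior of spectral graph wavelets comes from applying band-pass kernels g(s·λ) at several scales s to the graph Laplacian spectrum. The sketch below computes dense SGW coefficients by explicit eigendecomposition (fine for small graphs; real implementations typically use Chebyshev approximations); the kernel g(x) = x e^{-x} and the scale set are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def sgw_coefficients(W, scales):
    """Spectral graph wavelet coefficients for every vertex of a graph.
    W: (N, N) symmetric adjacency/weight matrix. Returns (len(scales), N, N):
    entry [s, i, :] is the wavelet centered at vertex i at scale scales[s]."""
    L = np.diag(W.sum(axis=1)) - W              # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)
    coeffs = []
    for s in scales:
        g = (s * lam) * np.exp(-s * lam)        # band-pass kernel g(x) = x e^{-x}
        coeffs.append(U @ np.diag(g) @ U.T)     # psi_{s,i} = U g(s Lambda) U^T e_i
    return np.stack(coeffs)

# Tiny example: a cycle graph with 8 vertices, three scales (coarse to fine).
N = 8
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
psi = sgw_coefficients(W, scales=[8.0, 2.0, 0.5])
print(psi.shape)   # (3, 8, 8)
```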
{"title":"Multi-scale and multi-level shape descriptor learning via a hybrid fusion network","authors":"Xinwei Huang , Nannan Li , Qing Xia , Shuai Li , Aimin Hao , Hong Qin","doi":"10.1016/j.gmod.2021.101121","DOIUrl":"https://doi.org/10.1016/j.gmod.2021.101121","url":null,"abstract":"<div><p><span>Discriminative and informative 3D shape<span> descriptors are of fundamental significance to computer graphics<span> applications, especially in the fields of geometry modeling and shape analysis. 3D shape descriptors, which reveal extrinsic/intrinsic properties of 3D shapes, have been well studied for decades and proved to be useful and effective in various analysis and synthesis tasks. Nonetheless, existing descriptors are mainly founded upon certain local differential attributes or global shape spectra, and certain combinations of both types. Conventional descriptors are typically customized for specific tasks with priori domain knowledge, which severely prevents their applications from widespread use. Recently, neural networks, benefiting from their powerful data-driven capability for general feature extraction from raw data without any domain knowledge, have achieved great success in many areas including shape analysis. In this paper, we present a novel hybrid fusion network (HFN) that learns multi-scale and multi-level shape representations via uniformly integrating a traditional region-based descriptor with modern neural networks. On one hand, we exploit the spectral graph wavelets (SGWs) to extract the shapes’ local-to-global features. On the other hand, the shapes are fed into a </span></span></span>convolutional neural network to generate multi-level features simultaneously. Then a hierarchical fusion network learns a general and unified representation from these two different types of features which capture multi-scale and multi-level properties of the underlying shapes. Extensive experiments and comprehensive comparisons demonstrate our HFN can achieve better performance in common shape analysis tasks, such as shape retrieval and recognition, and the learned hybrid descriptor is robust, informative, and discriminative with more potential for widespread applications.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"119 ","pages":"Article 101121"},"PeriodicalIF":1.7,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90028205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph-PBN: Graph-based parallel branch network for efficient point cloud learning
Pub Date: 2022-01-01, DOI: 10.1016/j.gmod.2021.101120 (Graphical Models, Volume 119, Article 101120)
Cheng Zhang, Hao Chen, Haocheng Wan, Ping Yang, Zizhao Wu
In recent years, approaches based on graph convolutional networks (GCNs) have achieved state-of-the-art performance in point cloud learning. The typical GCN pipeline is modeled as a two-stage learning process: graph construction and feature learning. We argue that this process is inefficient because a high percentage of the total time is spent on graph construction, which requires accessing a large amount of sparse data, rather than on actual feature learning. To alleviate this problem, we propose a graph-based parallel branch network (Graph-PBN) that introduces a parallel branch structure to point cloud learning. In particular, Graph-PBN is composed of two branches: a PointNet branch and a GCN branch. PointNet has advantages in memory access and computational cost, while GCN behaves better in local context modeling. The two branches are combined in our architecture to fully exploit the potential of PointNet and GCN, facilitating efficient and accurate recognition. To better aggregate the features of each node in the GCN, we investigate a novel operator, called EAGConv, that augments local context by fully utilizing the geometric and semantic features in a local graph. We conduct experiments on several benchmark datasets, and the results validate the significant performance of our method compared with other state-of-the-art approaches. Our code will be made publicly available at https://github.com/zhangcheng828/Graph-PBN.
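The efficiency argument, that per-point MLPs are cheap while graph construction dominates, is easy to see in code. The sketch below times a kNN graph build against a PointNet-style shared MLP with max pooling on the same cloud; it is an illustration of the bottleneck, not the paper's benchmark.

```python
import time
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((65536, 3)).astype(np.float32)

# Stage 1 of a typical GCN pipeline: build a k-nearest-neighbor graph.
t0 = time.perf_counter()
tree = cKDTree(pts)
_, knn_idx = tree.query(pts, k=16)            # (N, 16) neighbor indices
t_graph = time.perf_counter() - t0

# PointNet-style branch: a shared per-point MLP followed by global max pooling
# (random weights; only the cost pattern matters here).
W1 = rng.random((3, 64)).astype(np.float32)
W2 = rng.random((64, 128)).astype(np.float32)
t0 = time.perf_counter()
feat = np.maximum(np.maximum(pts @ W1, 0) @ W2, 0)   # per-point features
global_feat = feat.max(axis=0)                       # order-invariant pooling
t_mlp = time.perf_counter() - t0

print(f"graph construction: {t_graph:.3f}s, pointwise MLP: {t_mlp:.3f}s")
```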
{"title":"Graph-PBN: Graph-based parallel branch network for efficient point cloud learning","authors":"Cheng Zhang, Hao Chen, Haocheng Wan, Ping Yang, Zizhao Wu","doi":"10.1016/j.gmod.2021.101120","DOIUrl":"https://doi.org/10.1016/j.gmod.2021.101120","url":null,"abstract":"<div><p><span><span><span>In recent years, approaches based on graph convolutional networks (GCNs) have achieved state-of-the-art performance in point cloud learning. The typical pipeline of GCNs is modeled as a two-stage learning process: </span>graph construction and </span>feature learning<span>. We argue that such process exhibits low efficiency because a high percentage of the total time is consumed during the graph construction process when a large amount of sparse data are required to be accessed rather than on actual feature learning. To alleviate this problem, we propose a graph-based parallel branch network (Graph-PBN) that introduces a parallel branch structure to point cloud learning in this study. In particular, Graph-PBN is composed of two branches: the PointNet branch and the GCN branch. PointNet exhibits advantages in memory access and computational cost, while GCN behaves better in local context modeling. The two branches are combined in our architecture to utilize the potential of PointNet and GCN fully, facilitating the achievement of efficient and accurate recognition results. To better aggregate the features of each node in GCN, we investigate a novel operator, called EAGConv, to augment their local context by fully utilizing geometric and semantic features in a local graph. We conduct experiments on several benchmark datasets, and experiment results validate the significant performance of our method compared with other state-of-the-art approaches. Our code will be made publicly available at </span></span><span>https://github.com/zhangcheng828/Graph-PBN</span><svg><path></path></svg>.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"119 ","pages":"Article 101120"},"PeriodicalIF":1.7,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91725397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authoring multi-style terrain with global-to-local control
Pub Date: 2022-01-01, DOI: 10.1016/j.gmod.2021.101122 (Graphical Models, Volume 119, Article 101122)
Jian Zhang, Chen Li, Peichi Zhou, Changbo Wang, Gaoqi He, Hong Qin
The appearance styles of natural terrains vary significantly from region to region in the real world, and there is a strong need in computer graphics to effectively produce realistic terrain with a given style. In this paper, we advocate a novel neural network approach to the rapid synthesis of multi-style terrains that learns and infers directly from real terrain data. The key idea is to devise a conditional generative adversarial network (GAN) that encourages and favors maximum-distance embedding of the acquired styles in the latent space. Towards this functionality, we first collect a dataset that exhibits apparent diversity in terrain style attributes. Second, we design multiple discriminators that can distinguish different terrain styles. Third, we employ the discriminators to extract terrain features at different spatial scales, so that the generator can produce new terrains by fusing finer-scale and coarser-scale styles. In our experiments, we collect 10 typical terrain datasets from real terrain data covering a wide range of regions. Our approach successfully generates realistic terrains with global-to-local style control. The experimental results confirm that our neural network can produce natural terrains with high fidelity and supports user-friendly style interpolation and style mixing for terrain authoring.
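To ground the idea of conditioning generation on a terrain style, here is a deliberately small PyTorch sketch of a generator that decodes a latent code together with a learned style embedding into a heightfield. The architecture, sizes, and style count (10, matching the number of datasets mentioned above) are illustrative assumptions; the paper's conditional GAN with multiple multi-scale discriminators is substantially more elaborate.

```python
import torch
import torch.nn as nn

class StyleConditionedGenerator(nn.Module):
    """Minimal conditional generator: a latent code plus a learned style
    embedding is decoded into a single-channel heightfield."""
    def __init__(self, n_styles=10, z_dim=64, style_dim=16, size=64):
        super().__init__()
        self.embed = nn.Embedding(n_styles, style_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + style_dim, 256), nn.ReLU(),
            nn.Linear(256, 4 * 4 * 64), nn.ReLU(),
            nn.Unflatten(1, (64, 4, 4)),
            nn.Upsample(scale_factor=size // 4, mode='bilinear',
                        align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z, style_id):
        # Concatenate the latent code with the style embedding, then decode.
        return self.net(torch.cat([z, self.embed(style_id)], dim=1))

# Sample two 64x64 heightfields in styles 3 and 7.
G = StyleConditionedGenerator()
z = torch.randn(2, 64)
heights = G(z, torch.tensor([3, 7]))
print(heights.shape)   # torch.Size([2, 1, 64, 64])
```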
{"title":"Authoring multi-style terrain with global-to-local control","authors":"Jian Zhang , Chen Li , Peichi Zhou , Changbo Wang , Gaoqi He , Hong Qin","doi":"10.1016/j.gmod.2021.101122","DOIUrl":"https://doi.org/10.1016/j.gmod.2021.101122","url":null,"abstract":"<div><p><span>The appearance styles of natural terrains vary significantly from region to region in real world, and there is a strong need to effectively produce realistic terrain with certain style in computer graphics<span><span>. In this paper, we advocate a novel neural network approach to the rapid synthesis of multi-style terrains that could directly learn and infer from real terrain data. The key idea is to explicitly devise a conditional </span>generative adversarial network (GAN) which encourages and favors the maximum-distance embedding of acquired styles in the latent space. Towards this functionality, we first collect a dataset that exhibits apparent terrain style diversity in their style attributes. Second, we design multiple </span></span>discriminators<span> that can distinguish different terrain styles. Third, we employ discriminators to extract terrain features in different spatial scales, so that the developed generator can produce new terrains by fusing the finer-scale and coarser-scale styles. In our experiments, we collect 10 typical terrain datasets from real terrain data that cover a wide range of regions. Our approach successfully generates realistic terrains with global-to-local style control. The experimental results have confirmed our neural network can produce natural terrains with high fidelity, which are user-friendly to style interpolation and style mixing for the terrain authoring task.</span></p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"119 ","pages":"Article 101122"},"PeriodicalIF":1.7,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91764503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}