{"title":"Resolving Non-Manifoldness on Meshes from Dual Marching Cubes","authors":"D. Zint, R. Grosso, Philipp Gürtler","doi":"10.2312/egs.20221029","DOIUrl":"https://doi.org/10.2312/egs.20221029","url":null,"abstract":"There are several methods that reconstruct surfaces from volume data by generating triangle or quad meshes on the dual of the uniform grid. Those methods often provide meshes with better quality than the well-known marching cubes algorithm. However, they have a common issue: the meshes are not guaranteed to be manifold. We address this issue by presenting a post-processing routine that resolves all non-manifold edges with local refinement. New vertices are positioned on the trilinear interpolant. We verify our method on a wide range of data sets and show that we are capable of resolving all non-manifold issues.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"8 1","pages":"45-48"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74668693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
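The non-manifold condition this abstract targets can be detected with a simple incidence count: in a manifold mesh every interior edge borders exactly two faces. A minimal sketch of that check (not the paper's refinement routine), assuming a face-index mesh:

```python
from collections import defaultdict

def non_manifold_edges(faces):
    """Return the edges shared by more than two faces.

    `faces` is a list of vertex-index tuples (triangles or quads).
    An edge of a manifold mesh borders at most two faces; any edge
    with three or more incident faces is non-manifold.
    """
    edge_count = defaultdict(int)
    for face in faces:
        n = len(face)
        for i in range(n):
            # store each undirected edge with sorted endpoints
            a, b = face[i], face[(i + 1) % n]
            edge_count[(min(a, b), max(a, b))] += 1
    return [e for e, c in edge_count.items() if c > 2]

# Two triangles plus a third "fin" triangle sharing edge (0, 1):
faces = [(0, 1, 2), (1, 0, 3), (0, 1, 4)]
print(non_manifold_edges(faces))  # [(0, 1)]
```

The paper's routine would then split such edges by local refinement, placing the new vertices on the trilinear interpolant of the volume data.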
{"title":"Robust Sample Budget Allocation for MIS","authors":"László Szirmay-Kalos, M. Sbert","doi":"10.2312/egs.20221022","DOIUrl":"https://doi.org/10.2312/egs.20221022","url":null,"abstract":"","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"76 12 1","pages":"17-20"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87856201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scene Synthesis with Automated Generation of Textual Descriptions","authors":"Julian Müller-Huschke, Marcel Ritter, M. Harders","doi":"10.2312/egs.20221026","DOIUrl":"https://doi.org/10.2312/egs.20221026","url":null,"abstract":"Most current research on automatically captioning and describing scenes with spatial content focuses on images. We outline that generating descriptive text for a synthesized 3D scene can be achieved via a suitable intermediate representation employed in the synthesis algorithm. As an example, we synthesize scenes of medieval village settings, and generate their descriptions. Our system employs graph grammars, Markov Chain Monte Carlo optimization, and a natural language generation pipeline. Randomly placed objects are evaluated and optimized by a cost function capturing neighborhood relations, path layouts, and collisions. Further, in a pilot study we assess the performance of our framework by comparing the generated descriptions to others provided by human subjects. While the latter were often short and low-effort, the highest-rated ones clearly outperform our generated ones. Nevertheless, the average of all collected human descriptions was indeed rated by the study participants as being less accurate than the automated ones. CCS Concepts • Computing methodologies → Computer graphics; Natural language generation; The scene consists of three roads meeting at an intersection, a group of trees, an oak tree and three market stands. The three market stands are next to the first road. The group of trees consists of three pine trees and three bushes. The first market stand consists of a sign to the right of a table. A big pot of stew is in the middle of this table. The second market stand consists of a sign beside a table. A big pot of stew is in the middle of this table. The third market stand consists of three flowerpots on top of a table and a sign. This sign is to the right of this table. Figure 1: (Left:) Example of procedurally generated 3D scene. (Right:) Automatically generated description with our framework.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"27 1","pages":"33-36"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86669347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
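The abstract's "randomly placed objects are evaluated and optimized by a cost function" step can be illustrated with a toy Metropolis loop. This is a hedged sketch, not the paper's system: the cost terms (a pairwise collision penalty and a weak anchor pull) merely stand in for the neighborhood-relation, path-layout, and collision terms the paper describes, and all names are illustrative.

```python
import math
import random

def cost(positions, min_dist=1.0, anchor=(0.0, 0.0)):
    """Toy layout cost: penalize object pairs closer than `min_dist`
    (collisions) plus a weak pull toward an anchor point."""
    c = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = math.dist(positions[i], positions[j])
            if d < min_dist:
                c += (min_dist - d) ** 2          # collision penalty
    for p in positions:
        c += 0.01 * math.dist(p, anchor)          # layout term
    return c

def mcmc_optimize(positions, steps=2000, temp=0.1, seed=0):
    """Metropolis sampling over placements: perturb one object, accept
    downhill moves always and uphill moves with Boltzmann probability;
    keep the best layout seen."""
    rng = random.Random(seed)
    current = [tuple(p) for p in positions]
    c_cur = cost(current)
    best, c_best = current, c_cur
    for _ in range(steps):
        i = rng.randrange(len(current))
        proposal = list(current)
        x, y = proposal[i]
        proposal[i] = (x + rng.gauss(0, 0.5), y + rng.gauss(0, 0.5))
        c_new = cost(proposal)
        if c_new < c_cur or rng.random() < math.exp((c_cur - c_new) / temp):
            current, c_cur = proposal, c_new
            if c_cur < c_best:
                best, c_best = current, c_cur
    return best, c_best

start = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1)]      # heavily colliding
layout, c = mcmc_optimize(start)
print(c <= cost(start))  # True
```

In the paper, the optimized intermediate representation (a scene graph produced by graph grammars) is then what feeds the natural language generation pipeline.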
{"title":"Simple Techniques for a Novel Human Body Pose Optimisation Using Differentiable Inverse Rendering","authors":"Munkhtulga Battogtokh, R. Borgo","doi":"10.2312/egs.20221018","DOIUrl":"https://doi.org/10.2312/egs.20221018","url":null,"abstract":"Human body 3D reconstruction has a wide range of applications including 3D-printing, art, games, and even technical sport analysis. This is a challenging problem due to 2D ambiguity, diversity of human poses, and costs in obtaining multiple views. We propose a novel optimisation scheme that bypasses the prior bias bottleneck and the 2D-pose annotation bottleneck that we identify in single-view reconstruction, and moves towards low-resource photo-realistic 3D reconstruction that directly and fully utilises the target image. Our scheme combines the domain-specific method SMPLify-X and the domain-agnostic inverse rendering method redner, with two simple yet powerful techniques. We demonstrate that our techniques can 1) improve the accuracy of the reconstructed body both qualitatively and quantitatively for challenging inputs, and 2) restrict optimisation to a selected part only. Our ideas promise extension to more difficult problems and domains even beyond human body reconstruction. CCS Concepts • Computing methodologies → Reconstruction; Computer vision; Rendering; Ray tracing;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"7 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87479989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph Partitioning Algorithms for Rigid Body Simulations","authors":"Yinchu Liu, S. Andrews","doi":"10.2312/egs.20221036","DOIUrl":"https://doi.org/10.2312/egs.20221036","url":null,"abstract":"We propose several graph partitioning algorithms for improving the performance of rigid body simulations. The algorithms operate on the graph formed by rigid bodies (nodes) and constraints (edges), producing non-overlapping and contiguous sub-systems that can be simulated in parallel by a domain decomposition technique. We demonstrate that certain partitioning algorithms reduce the computational time of the solver, and that graph refinement techniques that reduce coupling between sub-systems, such as the Kernighan–Lin and Fiduccia–Mattheyses algorithms, yield additional performance improvements.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"918 1","pages":"73-76"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77026745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
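The coupling the abstract mentions is the edge cut: the number of constraints connecting bodies in different sub-systems. A minimal Kernighan–Lin-style refinement sketch over the body/constraint graph (a greedy pair-swap loop, not the full gain-bucket algorithm from the paper's references):

```python
def edge_cut(edges, part):
    """Number of constraint edges crossing the two sub-systems."""
    return sum(1 for u, v in edges if part[u] != part[v])

def kl_refine_pass(edges, part):
    """Greedy Kernighan-Lin-style refinement: repeatedly swap the pair
    of nodes (one from each side) that most reduces the edge cut,
    until no swap helps. `part` maps node -> 0 or 1."""
    part = dict(part)
    improved = True
    while improved:
        improved = False
        left = [n for n, s in part.items() if s == 0]
        right = [n for n, s in part.items() if s == 1]
        base = edge_cut(edges, part)
        best = None
        for a in left:
            for b in right:
                part[a], part[b] = 1, 0          # try the swap
                gain = base - edge_cut(edges, part)
                part[a], part[b] = 0, 1          # undo it
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, a, b)
        if best:
            _, a, b = best
            part[a], part[b] = 1, 0              # commit the best swap
            improved = True
    return part

# Bodies 0-3 with constraints; the initial split cuts 2 of 3 edges:
edges = [(0, 1), (2, 3), (0, 2)]
refined = kl_refine_pass(edges, {0: 0, 1: 1, 2: 0, 3: 1})
print(edge_cut(edges, refined))  # 1
```

With fewer cut edges, fewer constraints couple the sub-systems, which is what lets the domain decomposition solver parallelize more effectively.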
{"title":"43rd Annual Conference of the European Association for Computer Graphics, Eurographics 2022 - Short Papers, Reims, France, April 25-29, 2022","authors":"","doi":"10.2312/2633167","DOIUrl":"https://doi.org/10.2312/2633167","url":null,"abstract":"","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74011119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Motion Compression with Frequency-adaptive Fourier Feature Network","authors":"Kenji Tojo, Yifei Chen, Nobuyuki Umetani","doi":"10.2312/egs.20221033","DOIUrl":"https://doi.org/10.2312/egs.20221033","url":null,"abstract":"We present a neural-network-based compression method to alleviate the storage cost of motion capture data. Human motions, such as locomotion, often consist of periodic movements. We leverage this periodicity by applying Fourier features to a multilayer perceptron network. Our novel algorithm finds a set of Fourier feature frequencies based on the discrete cosine transformation (DCT) of the motion. During training, we incrementally add a dominant frequency of the DCT to the current set of Fourier feature frequencies until a given quality threshold is satisfied. We conducted an experiment using the CMU motion dataset, and the results suggest that our method achieves a high overall compression ratio while maintaining quality.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"61 1","pages":"61-64"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80619045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
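The two building blocks named in the abstract, picking dominant DCT frequencies and mapping time to Fourier features, can be sketched as follows. This is an illustrative sketch under assumed conventions (a naive type-II DCT, top-k selection instead of the paper's incremental quality-threshold loop); all function names are hypothetical.

```python
import numpy as np

def dct2(x):
    """Type-II DCT of a 1-D signal (naive O(N^2) form)."""
    n = len(x)
    j = np.arange(n)
    k = np.arange(n)[:, None]
    return (x[None, :] * np.cos(np.pi / n * (j + 0.5) * k)).sum(axis=1)

def dominant_frequencies(signal, count):
    """Indices of the `count` largest-magnitude DCT coefficients,
    skipping the DC term."""
    coeffs = np.abs(dct2(np.asarray(signal, dtype=float)))
    coeffs[0] = 0.0
    return np.argsort(coeffs)[::-1][:count]

def fourier_features(t, freqs):
    """Map a scalar time t in [0, 1] to [sin, cos] features at `freqs`,
    the input encoding fed to the MLP."""
    f = np.asarray(freqs, dtype=float)
    return np.concatenate([np.sin(2 * np.pi * f * t),
                           np.cos(2 * np.pi * f * t)])

# A motion-like signal built from one DCT basis function is picked up
# as a single dominant frequency:
t = np.arange(64)
signal = np.cos(np.pi / 64 * (t + 0.5) * 3)
print(dominant_frequencies(signal, 1))  # [3]
```

The paper's training loop would grow the frequency set one dominant DCT frequency at a time, retraining until the reconstruction quality threshold is met.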
{"title":"Interactive Facial Expression Editing with Non-linear Blendshape Interpolation","authors":"J. Roh, S. Kim, Hanyoung Jang, Yeongho Seol, Jongmin Kim","doi":"10.2312/egs.20221035","DOIUrl":"https://doi.org/10.2312/egs.20221035","url":null,"abstract":"The ability to manipulate facial animations interactively is vital for enhancing the productivity and quality of character animation. In this paper, we present a novel interactive facial animation editing system that can express the naturalness of non-linear facial movements in real-time. The proposed system is based on a fully automatic algorithm that maintains all positional constraints while deforming the facial mesh as realistically as possible. Our method is based on direct manipulation with non-linear blendshape interpolation. We formulate facial animation editing as a two-step quadratic minimization and solve it efficiently. Our results show that the proposed method produces the desired, realistic facial animations better than existing mesh deformation methods, which are mainly based on linear combinations and optimization.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"9 1","pages":"69-72"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85312322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
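The linear-combination baseline this paper improves on amounts to a least-squares solve for blendshape weights given dragged vertex positions. A hedged sketch of that baseline (the paper itself replaces this single linear solve with a two-step quadratic minimization over non-linear interpolation; names and the [0, 1] clamp are assumed conventions):

```python
import numpy as np

def fit_blendshape_weights(basis, target_offsets, clamp=True):
    """Least-squares blendshape weights for a set of constrained vertices.

    `basis` is (3m x k): stacked xyz offsets of m constrained vertices
    for each of k blendshapes, relative to the neutral face.
    `target_offsets` is (3m,): where the artist dragged those vertices.
    Solves min_w ||basis @ w - target_offsets||^2.
    """
    w, *_ = np.linalg.lstsq(basis, target_offsets, rcond=None)
    if clamp:
        w = np.clip(w, 0.0, 1.0)  # blendshape weights usually live in [0, 1]
    return w

# Two blendshapes moving one vertex along x and y respectively:
basis = np.array([[1.0, 0.0],
                  [0.0, 2.0],
                  [0.0, 0.0]])
target = np.array([0.5, 1.0, 0.0])  # drag: +0.5 in x, +1.0 in y
print(fit_blendshape_weights(basis, target))  # [0.5 0.5]
```

Direct manipulation then means re-solving this every time the artist moves a handle, so the rest of the face follows the constrained vertices.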
{"title":"Improved Lighting Models for Facial Appearance Capture","authors":"Ying-Qing Xu, Jérémy Riviere, G. Zoss, P. Chandran, D. Bradley, P. Gotardo","doi":"10.2312/egs.20221019","DOIUrl":"https://doi.org/10.2312/egs.20221019","url":null,"abstract":"Facial appearance capture techniques estimate geometry and reflectance properties of facial skin by performing a computationally intensive inverse rendering optimization in which one or more images are re-rendered a large number of times and compared to real images coming from multiple cameras. Due to the high computational burden, these techniques often make several simplifying assumptions to tame complexity and make the problem more tractable. For example, it is common to assume that the scene consists of only distant light sources, and ignore indirect bounces of light (on the surface and within the surface). Also, methods based on polarized lighting often simplify the light interaction with the surface and assume perfect separation of diffuse and specular reflectance. In this paper, we move in the opposite direction and demonstrate the impact on facial appearance capture quality when departing from these idealized conditions towards models that seek to more accurately represent the lighting, while at the same time minimally increasing computational burden. We compare the results obtained with a state-of-the-art appearance capture method [RGB∗20], with and without our proposed improvements to the lighting model.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"61 4","pages":"5-8"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72573149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Generic Local Shape Properties for Adaptive Super-Sampling","authors":"Christian Reinbold, R. Westermann","doi":"10.2312/egs.20221032","DOIUrl":"https://doi.org/10.2312/egs.20221032","url":null,"abstract":"We propose a novel encoder/decoder-based neural network architecture that learns the view-dependent shape and appearance of geometry given as voxel representations. Since the network is trained on local geometry patches, it generalizes to arbitrary models. A geometry model is first encoded into a sparse voxel octree of features learned by a network, and this model representation can then be decoded by another network in turn for the intended task. We utilize the network for adaptive super-sampling in ray-tracing, to predict super-sampling patterns when seeing coarse-scale geometry. We discuss and evaluate the proposed network design, and demonstrate that the decoder network is compact and can be integrated seamlessly into on-chip ray-tracing kernels. We compare the results to previous screen-space super-sampling strategies as well as non-network-based world-space approaches.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"115 1","pages":"57-60"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77902195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
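The sparse voxel octree the abstract builds on can be illustrated with a minimal construction over a boolean occupancy grid. A sketch only: in the paper's setting the leaves would store learned feature vectors rather than booleans, and the representation would be optimized for GPU traversal.

```python
import numpy as np

def build_octree(grid, x=0, y=0, z=0, size=None):
    """Recursively build a sparse octree over a cubic boolean occupancy
    grid whose side is a power of two. Empty regions collapse to None,
    occupied unit voxels to True; internal nodes are 8-tuples of children."""
    if size is None:
        size = grid.shape[0]
    block = grid[x:x + size, y:y + size, z:z + size]
    if not block.any():
        return None                    # empty space is not stored
    if size == 1:
        return True                    # occupied leaf
    h = size // 2
    return tuple(
        build_octree(grid, x + dx * h, y + dy * h, z + dz * h, h)
        for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)
    )

def count_nodes(node):
    """Number of stored nodes (internal nodes plus occupied leaves)."""
    if node is None:
        return 0
    if node is True:
        return 1
    return 1 + sum(count_nodes(c) for c in node)

# One occupied voxel in an 8^3 grid: the octree stores a single path
# of 4 nodes instead of 512 dense voxels.
grid = np.zeros((8, 8, 8), dtype=bool)
grid[0, 0, 0] = True
print(count_nodes(build_octree(grid)))  # 4
```

The sparsity is what makes the encoded model compact: surface geometry touches only a small fraction of the voxels, so most subtrees collapse away.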