Pub Date: 1900-01-01. DOI: 10.2312/LocalChapterEvents/CEIG/CEIG08/265-268
Antonio Martínez-Albalá, Juan José Jiménez-Delgado, Francisco R. Feito-Higueruela, R. J. Segura
The complexity inherent in the interaction between objects makes it necessary to find efficient solutions that reduce it. One approach is the use of hierarchical level-of-detail structures to pre-process the model and optimize collision detection; within this field, one alternative is the use of tetra-trees. Tetra-trees are trees of tetra-cones that act as a simplified bounding surface of the original mesh and reduce the computational load of collision detection. This simplified hierarchical level-of-detail approach has been applied to the interaction between humans and 3D scenes through specific devices such as haptics. The use of haptic devices makes it possible to evaluate the characteristics of the developed structure under real conditions.
{"title":"Simplificación de Mallas con Tetra-trees aplicado a Entornos de Interacción con Dispositivos Hápticos","authors":"Antonio Martínez-Albalá, Juan José Jiménez-Delgado, Francisco R. Feito-Higueruela, R. J. Segura","doi":"10.2312/LocalChapterEvents/CEIG/CEIG08/265-268","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG08/265-268","url":null,"abstract":"The complexity inherent in the interaction between objects makes it necessary to find efficient solutions that reduce it. One approach is the use of hierarchical level-of-detail structures to pre-process the model and optimize collision detection; within this field, one alternative is the use of tetra-trees. Tetra-trees are trees of tetra-cones that act as a simplified bounding surface of the original mesh and reduce the computational load of collision detection. This simplified hierarchical level-of-detail approach has been applied to the interaction between humans and 3D scenes through specific devices such as haptics. The use of haptic devices makes it possible to evaluate the characteristics of the developed structure under real conditions.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114526723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
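The cone-based classification the abstract describes can be illustrated with a minimal CPU sketch. This is not the paper's tetra-tree: octants around an interior point stand in for tetra-cones, and all names here are illustrative. A collision query then tests only the triangles whose sector contains the query direction.

```python
# Hedged sketch of cone-based triangle classification for collision culling.
# Octants are a simple stand-in for the paper's tetra-cones.

def octant(v):
    """Index 0..7 of the octant that direction vector v falls in."""
    x, y, z = v
    return (x >= 0) | ((y >= 0) << 1) | ((z >= 0) << 2)

def build_sectors(triangles, center):
    """Bucket each triangle by the octant of its centroid relative to center."""
    sectors = {i: [] for i in range(8)}
    for tri in triangles:
        cx = sum(p[0] for p in tri) / 3 - center[0]
        cy = sum(p[1] for p in tri) / 3 - center[1]
        cz = sum(p[2] for p in tri) / 3 - center[2]
        sectors[octant((cx, cy, cz))].append(tri)
    return sectors

def candidate_triangles(sectors, center, point):
    """Collision candidates: only triangles in the sector containing `point`."""
    d = (point[0] - center[0], point[1] - center[1], point[2] - center[2])
    return sectors[octant(d)]
```

The payoff is the usual hierarchical-culling one: a query against a mesh of n triangles inspects only the triangles of one angular sector instead of all n.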
M. C. Trinidad, C. Andújar, C. Bosch, A. Chica, Imanol Muñoz-Pandiella
Laser scanners enable the digitization of 3D surfaces by generating a point cloud where each point sample includes an intensity (infrared reflectivity) value. Some LiDAR scanners also incorporate cameras to capture the color of the surfaces visible from the scanner location. Getting usable colors everywhere across 360° scans is a challenging task, especially for indoor scenes. LiDAR scanners lack flashes, and placing proper light sources for a 360° indoor scene is either unfeasible or undesirable. As a result, color data from LiDAR scans often do not have an adequate quality, either because of poor exposure (too bright or too dark areas) or because of severe illumination changes between scans (e.g. direct sunlight vs. cloudy lighting). In this paper, we present a new method to recover plausible color data from the infrared data available in LiDAR scans. The main idea is to train an adapted image-to-image translation network using color and intensity values on well-exposed areas of scans. At inference time, the network is able to recover plausible color using exclusively the intensity values. The immediate application of our approach is the selective colorization of LiDAR data in those scans or regions with missing or poor color data.
{"title":"Neural Colorization of Laser Scans","authors":"M. C. Trinidad, C. Andújar, C. Bosch, A. Chica, Imanol Muñoz-Pandiella","doi":"10.2312/ceig.20211356","DOIUrl":"https://doi.org/10.2312/ceig.20211356","url":null,"abstract":"Laser scanners enable the digitization of 3D surfaces by generating a point cloud where each point sample includes an intensity (infrared reflectivity) value. Some LiDAR scanners also incorporate cameras to capture the color of the surfaces visible from the scanner location. Getting usable colors everywhere across 360° scans is a challenging task, especially for indoor scenes. LiDAR scanners lack flashes, and placing proper light sources for a 360° indoor scene is either unfeasible or undesirable. As a result, color data from LiDAR scans often do not have an adequate quality, either because of poor exposure (too bright or too dark areas) or because of severe illumination changes between scans (e.g. direct sunlight vs. cloudy lighting). In this paper, we present a new method to recover plausible color data from the infrared data available in LiDAR scans. The main idea is to train an adapted image-to-image translation network using color and intensity values on well-exposed areas of scans. At inference time, the network is able to recover plausible color using exclusively the intensity values. The immediate application of our approach is the selective colorization of LiDAR data in those scans or regions with missing or poor color data.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114863428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
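The train-on-well-exposed / infer-from-intensity data flow described above can be shown with a deliberately tiny stand-in: the paper uses an image-to-image translation network, but a per-channel linear regression is enough to illustrate the pipeline. All function names are illustrative.

```python
# Toy illustration of the colorization data flow: fit a per-channel linear
# model intensity -> color on well-exposed samples, then predict color from
# intensity alone. A linear fit stands in for the paper's neural network.

def fit_channel(intensity, channel):
    """Least-squares fit: channel ~ a * intensity + b."""
    n = len(intensity)
    mx = sum(intensity) / n
    my = sum(channel) / n
    sxx = sum((x - mx) ** 2 for x in intensity)
    sxy = sum((x - mx) * (y - my) for x, y in zip(intensity, channel))
    a = sxy / sxx
    return a, my - a * mx

def train(samples):
    """samples: list of (intensity, (r, g, b)) from well-exposed scan areas."""
    xs = [s[0] for s in samples]
    return [fit_channel(xs, [s[1][c] for s in samples]) for c in range(3)]

def colorize(model, intensity):
    """Predict an (r, g, b) color for a point with missing or poor color."""
    return tuple(a * intensity + b for a, b in model)
```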
Natural environments are a very important part of virtual worlds, both for video games and for simulators, but their manual creation can be very expensive. Procedural creation makes it possible to generate them easily and quickly, although there is no way to control the final result quickly and accurately. The purpose of this work is to present a new creation method that allows the procedural generation of a natural environment for applications such as rail shooters. The vegetation of the natural environment is placed automatically along the area of the route. The presented method is based on establishing on the ground a grid of points, each of which is assigned a random probability of appearance for each species based on Perlin noise. In addition, the method draws from the heightmap the values needed to distribute the natural elements. These values are combined with the distance along the route and with a noise distribution, yielding placement patterns that have a greater probability of occurrence at favorable points of the map and near the route. The results show that the method allows the procedural generation of these environments for any heightmap, concentrating the realism and the placement of the natural elements in the user's visualization zone. CCS Concepts Software and its engineering → Virtual worlds training simulations;
{"title":"Procedural Generation of Natural Environments with Restrictions","authors":"C. Gasch, M. Chover, I. Remolar","doi":"10.2312/CEIG.20171219","DOIUrl":"https://doi.org/10.2312/CEIG.20171219","url":null,"abstract":"Natural environments are a very important part of virtual worlds, both for video games and for simulators, but their manual creation can be a very expensive job. The procedural creation of these, allows to generate them easily and quickly, although there is no way to control the final result quickly and accurately. The purpose of this work is to present a new method of creation, that allows the procedural generation of a natural environment for applications such as rail shooter. The vegetation of the natural environment will be placed automatically along the area of the route. The method presented, is based on establishing on the ground a grid of points which are assigned a random function of probability of appearance of each species based on Perlin noise. In addition, the method draws from the heightmap the values necessary to distribute the natural elements. These values are combined along the distance of the route and next to a noise distribution, thus obtaining placement patterns that have a greater probability of occurrence in favorable points of the map and near the route. The results show that the method allows the procedural generation of these environments for any heightmap, also focusing the realism and the placement of the natural elements in the user visualization zone. 
CCS Concepts Software and its engineering → Virtual worlds training simulations;","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130021494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
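The placement scheme described above (grid of points, noise-driven species probability, heightmap suitability, proximity to the route) can be sketched as follows. This is a minimal illustration, assuming a heightmap function, a route as a point list, and cheap value noise standing in for Perlin noise; all names and weights are illustrative.

```python
import math
import random

def value_noise(x, y, seed=0):
    """Cheap deterministic pseudo-noise in [0, 1) (stand-in for Perlin noise)."""
    h = math.sin(x * 12.9898 + y * 78.233 + seed) * 43758.5453
    return h - math.floor(h)

def dist_to_route(x, y, route):
    """Distance from (x, y) to the nearest sample point of the route."""
    return min(math.hypot(x - rx, y - ry) for rx, ry in route)

def place_species(grid, heightmap, route, max_dist, seed=0):
    """Return the grid points where one species is placed.

    The appearance probability combines noise, a height-suitability term,
    and proximity to the route, so placements cluster where the user looks.
    """
    rng = random.Random(seed)
    placed = []
    for (x, y) in grid:
        suitability = 1.0 - heightmap(x, y)                      # prefer low ground
        proximity = max(0.0, 1.0 - dist_to_route(x, y, route) / max_dist)
        p = value_noise(x, y, seed) * suitability * proximity
        if rng.random() < p:
            placed.append((x, y))
    return placed
```

Because the proximity term clamps to zero beyond `max_dist`, no vegetation is ever placed outside the band around the route, which is the restriction the title refers to.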
Pub Date: 1900-01-01. DOI: 10.2312/LocalChapterEvents/CEIG/CEIG09/047-055
Ibon Eskudero, Jairo R. Sánchez, Carlos Buchart, Alex García-Alonso, Diego Borro
Image-based 3D tracking is usually solved using geometric constraints or stochastic algorithms based on state filters. The first option is fast but not very robust; the second is robust but less efficient. In this work we improve a stochastic 3D tracking method based on the particle filter and adapt it to the GPU, achieving real-time performance. We also demonstrate its validity experimentally using real video sequences.
{"title":"Tracking 3D en GPU Basado en el Filtro de Partículas","authors":"Ibon Eskudero, Jairo R. Sánchez, Carlos Buchart, Alex García-Alonso, Diego Borro","doi":"10.2312/LocalChapterEvents/CEIG/CEIG09/047-055","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG09/047-055","url":null,"abstract":"Image-based 3D tracking is usually solved using geometric constraints or stochastic algorithms based on state filters. The first option is fast but not very robust; the second is robust but less efficient. In this work we improve a stochastic 3D tracking method based on the particle filter and adapt it to the GPU, achieving real-time performance. We also demonstrate its validity experimentally using real video sequences.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134241907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
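The predict / weight / resample loop of the particle filter the paper accelerates can be sketched in a few lines. This is a generic 1D illustration, not the paper's GPU code; noise levels and particle counts are illustrative.

```python
import math
import random

# Basic particle filter: track a 1D position from noisy measurements.
def particle_filter(measurements, n=500, motion_noise=0.5, meas_noise=1.0, seed=1):
    rng = random.Random(seed)
    particles = [rng.uniform(-10, 10) for _ in range(n)]
    estimates = []
    for z in measurements:
        # Predict: diffuse particles with motion noise.
        particles = [p + rng.gauss(0, motion_noise) for p in particles]
        # Weight: Gaussian likelihood of the measurement given each particle.
        weights = [math.exp(-0.5 * ((z - p) / meas_noise) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Resample: draw a new particle set proportional to the weights.
        particles = rng.choices(particles, weights=weights, k=n)
        estimates.append(sum(particles) / n)
    return estimates
```

The weighting and resampling steps are embarrassingly parallel per particle, which is what makes the GPU port in the paper pay off.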
Pub Date: 1900-01-01. DOI: 10.2312/LocalChapterEvents/CEIG/CEIG12/166-166
A. Gurguí, F. Poveda, E. Martí
3D facial mesh registration is a key step in 3D facial analysis. This process calculates a mapping between faces in order to put each point of the two faces in correspondence. In this poster we introduce a new approach for 3D face registration based on geodesic distances and 2D non-rigid registration.
{"title":"Non-rigid 3D Faces Registration using Geodesic Distance Maps","authors":"A. Gurguí, F. Poveda, E. Martí","doi":"10.2312/LocalChapterEvents/CEIG/CEIG12/166-166","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG12/166-166","url":null,"abstract":"3D facial mesh registration is a key step in 3D facial analysis. This process calculates a mapping between faces in order to put each point of the two faces in correspondence. In this poster we introduce a new approach for 3D face registration based on geodesic distances and 2D non-rigid registration.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"253 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131652622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
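One ingredient named above, a geodesic distance map over a mesh, is commonly approximated with Dijkstra's algorithm on the edge graph. The sketch below shows that approximation (edge-path distances, not exact surface geodesics); the vertex/edge input format is illustrative.

```python
import heapq
import math

def geodesic_distances(vertices, edges, source):
    """Shortest-path distance from `source` to every reachable vertex,
    walking along mesh edges weighted by their Euclidean length."""
    adj = {i: [] for i in range(len(vertices))}
    for a, b in edges:
        w = math.dist(vertices[a], vertices[b])
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```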
Stippling is an artistic technique that has been used profusely since antiquity. One of its main problems is that it requires great skill and patience to achieve excellent results, due to the large number of points that must be drawn even for small formats. The use of computers in general, and GPUs in particular, with their computing capacity, has made it possible to overcome many of these limits. We present a real-time GPU stippling program that combines the advantages of positioning based on Weighted Centroidal Voronoi Diagrams with the realistic aspect of scanned points. CCS Concepts • Computer graphics → Non-photorealistic rendering;
{"title":"Fast Stippling based on Weighted Centroidal Voronoi Diagrams","authors":"Eila Gómez, E. Mendez, G. Arroyo, Domingo Martín","doi":"10.2312/CEIG.20171214","DOIUrl":"https://doi.org/10.2312/CEIG.20171214","url":null,"abstract":"Stippling is an artistic technique that has been used profusely since antiquity. One of its main problems is that it requires great skill and patience to achieve excellent results, due to the large number of points that must be drawn even for small formats. The use of computers in general, and GPUs in particular, with their computing capacity, has made it possible to overcome many of these limits. We present a real-time GPU stippling program that combines the advantages of positioning based on Weighted Centroidal Voronoi Diagrams with the realistic aspect of scanned points. CCS Concepts • Computer graphics → Non-photorealistic rendering;","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131816935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
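The Weighted Centroidal Voronoi relaxation behind stipple positioning can be sketched as a Lloyd-style iteration: assign density-weighted pixels to their nearest stipple, then move each stipple to the weighted centroid of its region. This brute-force CPU sketch (the paper does the equivalent on the GPU) uses an illustrative density format.

```python
def relax(points, density, w, h, iters=5):
    """One species of weighted Lloyd relaxation.

    density[y][x] in [0, 1]: darker image regions carry more weight and
    therefore attract more stipples. Returns the relaxed point list.
    """
    for _ in range(iters):
        sums = [[0.0, 0.0, 0.0] for _ in points]  # x-acc, y-acc, weight
        for y in range(h):
            for x in range(w):
                d = density[y][x]
                if d == 0:
                    continue
                # Nearest stipple = the pixel's Voronoi cell owner.
                i = min(range(len(points)),
                        key=lambda i: (points[i][0] - x) ** 2 + (points[i][1] - y) ** 2)
                sums[i][0] += x * d
                sums[i][1] += y * d
                sums[i][2] += d
        points = [(sx / wgt, sy / wgt) if wgt > 0 else p
                  for p, (sx, sy, wgt) in zip(points, sums)]
    return points
```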
Pub Date: 1900-01-01. DOI: 10.2312/LocalChapterEvents/CEIG/CEIG08/179-187
Jorge López-Moreno, A. Cabanes, D. Gutierrez
Light transport inside participating media, like fog or water, involves complex interaction phenomena, which make traditional 3D rendering approaches challenging and computationally expensive. To circumvent this, we propose an image-based method which adds perceptually plausible participating media effects to a single, clean high dynamic range image. We impose no prior requirements on the input image, and show that the underconstrained nature of the problem (where variables like depth or reflectance properties of the objects are obviously unknown) can be overcome with relatively little unskilled user input, similar to other image-editing techniques. We additionally validate the visual correctness of the results by means of psychophysical tests.
{"title":"Image-based Participating Media","authors":"Jorge López-Moreno, A. Cabanes, D. Gutierrez","doi":"10.2312/LocalChapterEvents/CEIG/CEIG08/179-187","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG08/179-187","url":null,"abstract":"Light transport inside participating media, like fog or water, involves complex interaction phenomena, which make traditional 3D rendering approaches challenging and computationally expensive. To circumvent this, we propose an image-based method which adds perceptually plausible participating media effects to a single, clean high dynamic range image. We impose no prior requirements on the input image, and show that the underconstrained nature of the problem (where variables like depth or reflectance properties of the objects are obviously unknown) can be overcome with relatively little unskilled user input, similar to other image-editing techniques. We additionally validate the visual correctness of the results by means of psychophysical tests.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131188718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
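The kind of per-pixel compositing an image-based participating-media method can apply, once a (user-assisted) depth estimate exists, is the standard transmittance blend. This is a generic worked example, not the paper's exact model; parameter values are illustrative.

```python
import math

def add_fog(pixel, depth, fog_color, sigma):
    """Blend an RGB pixel toward fog_color by transmittance T = exp(-sigma * depth).

    pixel, fog_color: (r, g, b); depth: estimated distance for this pixel;
    sigma: extinction coefficient of the medium (fog density).
    """
    t = math.exp(-sigma * depth)
    return tuple(t * c + (1.0 - t) * f for c, f in zip(pixel, fog_color))
```

At depth 0 the pixel is unchanged; as depth grows, the pixel converges to the fog color, which matches the intuition that distant objects vanish into the medium.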
Jorge López-Moreno, G. Cirio, D. Miraut, M. Otaduy
Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub-surface scattering. Previous work represents yarns as a sequence of identical but rotated cross-sections. While these approaches are able to produce very realistic illumination models, the required volumetric representation is difficult to compute and render, forfeiting any interactive feedback. In this paper, we introduce a GPU-based method for simultaneous visualization and voxelization, suitable for both interactive and offline rendering. Our method can interactively voxelize millions of polygons into a 3D texture, generating a volume with sub-voxel accuracy which is suitable even for high-density weaving such as linen.
{"title":"GPU Visualization and Voxelization of Yarn-Level Cloth","authors":"Jorge López-Moreno, G. Cirio, D. Miraut, M. Otaduy","doi":"10.2312/ceig.20141115","DOIUrl":"https://doi.org/10.2312/ceig.20141115","url":null,"abstract":"Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub-surface scattering. Previous work represents yarns as a sequence of identical but rotated cross-sections. While these approaches are able to produce very realistic illumination models, the required volumetric representation is difficult to compute and render, forfeiting any interactive feedback. In this paper, we introduce a GPU-based method for simultaneous visualization and voxelization, suitable for both interactive and offline rendering. Our method can interactively voxelize millions of polygons into a 3D texture, generating a volume with sub-voxel accuracy which is suitable even for high-density weaving such as linen.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133334862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
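The core voxelization idea (marking the cells of a 3D grid that geometry passes through) can be shown with a hedged CPU sketch that samples yarn centerline segments into a voxel set. The paper does this on the GPU with sub-voxel accuracy; the grid resolution and segment format here are illustrative.

```python
def voxelize_segments(segments, res):
    """Mark the voxels of a res^3 grid crossed by line segments.

    segments: [((x0, y0, z0), (x1, y1, z1)), ...] with coordinates in [0, 1).
    Returns a set of integer (i, j, k) voxel indices.
    """
    filled = set()
    for p0, p1 in segments:
        steps = max(2, 2 * res)  # sample densely enough for this grid
        for i in range(steps + 1):
            t = i / steps
            v = tuple(int((a + (b - a) * t) * res) for a, b in zip(p0, p1))
            v = tuple(min(res - 1, max(0, c)) for c in v)  # clamp to grid
            filled.add(v)
    return filled
```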
Pub Date: 1900-01-01. DOI: 10.2312/LocalChapterEvents/CEIG/CEIG12/155-155
Manuel García Sánchez, Alejandro León, J. Torres
{"title":"Tracking for Virtual Environment using Kinect","authors":"Manuel García Sánchez, Alejandro León, J. Torres","doi":"10.2312/LocalChapterEvents/CEIG/CEIG12/155-155","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG12/155-155","url":null,"abstract":"","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133240740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 1900-01-01. DOI: 10.2312/LocalChapterEvents/CEIG/CEIG09/227-230
J. Ramos, C. González-Morcillo, David Vallejo-Fernandez, L. M. López-López
Yafrid-NG is a peer-to-peer architecture and a set of protocols that reduce the time spent in the rendering phase by making use of a set of heterogeneous computers, distributed over the Internet, which supply some of their resources for rendering 3D scenes. The system is completely decentralized and takes advantage of p2p networks. The set of protocols needed for transferring files, the rendering process, and the recovery and composition of results are also defined. Yafrid-NG is specifically designed for physically based rendering methods, and the division of the work is optimized for that. We make use of a mechanism based on the scene properties to balance the complexity of the work units. Experimental results are presented to illustrate the benefits of using Yafrid-NG.
{"title":"Yafrid-NG: A Peer to peer Architecture for Physically Based Rendering","authors":"J. Ramos, C. González-Morcillo, David Vallejo-Fernandez, L. M. López-López","doi":"10.2312/LocalChapterEvents/CEIG/CEIG09/227-230","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG09/227-230","url":null,"abstract":"Yafrid-NG is a peer-to-peer architecture and a set of protocols that reduce the time spent in the rendering phase by making use of a set of heterogeneous computers, distributed over the Internet, which supply some of their resources for rendering 3D scenes. The system is completely decentralized and takes advantage of p2p networks. The set of protocols needed for transferring files, the rendering process, and the recovery and composition of results are also defined. Yafrid-NG is specifically designed for physically based rendering methods, and the division of the work is optimized for that. We make use of a mechanism based on the scene properties to balance the complexity of the work units. Experimental results are presented to illustrate the benefits of using Yafrid-NG.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134466454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
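The work-unit balancing mentioned above can be illustrated with a generic sketch: given a per-tile complexity estimate (a hypothetical stand-in for the scene-property mechanism in the abstract), greedily hand each tile to the least-loaded peer. This is not Yafrid-NG's actual protocol, just the balancing idea.

```python
def assign_units(tiles, complexity, n_peers):
    """Greedy makespan balancing of render work units across peers.

    tiles: list of work units; complexity: tile -> estimated cost;
    returns (assignment per peer, total load per peer).
    """
    loads = [0.0] * n_peers
    assignment = [[] for _ in range(n_peers)]
    # Largest tiles first, each to the currently least-loaded peer.
    for tile in sorted(tiles, key=complexity, reverse=True):
        peer = min(range(n_peers), key=loads.__getitem__)
        assignment[peer].append(tile)
        loads[peer] += complexity(tile)
    return assignment, loads
```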