Smooth Transitioning Between two Walking Metaphors for Virtual Reality Applications
DOI: 10.2312/CEIG.20191212
I. Salvetti, Alex Rios, N. Pelechano
Virtual navigation should be as similar as possible to how we move in the real world; however, the limitations of hardware and physical space make this a challenging problem. Tracking natural walking is only feasible when the dimensions of the virtual environment match those of the available physical space. The problem with most navigation techniques is that they produce motion sickness, because the observed optical flow does not match the vestibular and proprioceptive cues that arise during real physical movement. Walk-in-place is a technique that can successfully reduce motion sickness without losing presence in the virtual environment. It is suitable for navigating very large virtual environments, but it is rarely needed in small virtual spaces. Most current work focuses on one specific navigation metaphor; however, we have observed that if users are given the possibility of using walk-in-place for large distances, they tend to switch to normal walking when they are in a confined virtual area (such as a small room). Therefore, in this paper we present our ongoing work on seamlessly switching between two navigation metaphors, based on leg and head tracking, to achieve more intuitive and natural virtual navigation.
A Framework for Rendering, Simulation and Animation of Crowds
DOI: 10.2312/LocalChapterEvents/CEIG/CEIG09/001-010
N. Pelechano, B. Spanlang, A. Beacco
Real-time crowd simulation for virtual environment applications requires not only navigation and locomotion in large environments while avoiding obstacles and other agents, but also rendering high-quality 3D fully articulated figures to enhance realism. In this paper, we present a framework for real-time simulation of crowds. The framework is composed of a Hardware Accelerated Character Animation Library (HALCA), a crowd simulation system that can handle large crowds with high densities (HiDAC), and an Animation Planning Mediator (APM) that bridges the gap between the global positions of the agents given by HiDAC and the correct skeletal state, so that each agent is rendered with natural locomotion in real time. The main goal of this framework is to allow high-quality visualization and animation of several hundred realistic-looking characters (about 5000 polygons each) navigating virtual environments on a single-display PC, an HMD (Head Mounted Display), or a CAVE system. Results of several applications on a number of platforms are presented.
{"title":"A Framework for Rendering, Simulation and Animation of Crowds","authors":"N. Pelechano, B. Spanlang, A. Beacco","doi":"10.2312/LocalChapterEvents/CEIG/CEIG09/001-010","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG09/001-010","url":null,"abstract":"Real-time crowd simulation for virtual environment applications requires not only navigation and locomotion in large environments while avoiding obstacles and agents, but also rendering high quality 3D fully articulated figures to enhance realism. In this paper, we present a framework for real-time simulation of crowds. The framework is composed of a Hardware Accelerated Character Animation Library (HALCA), a crowd simulation system that can handle large crowds with high densities (HiDAC), and an Animation Planning Mediator (APM) that bridges the gap between the global position of the agents given by HiDAC and the correct skeletal state so that each agent is rendered with natural locomotion in real-time. The main goal of this framework is to allow high quality visualization and animation of several hundred realistic looking characters (about 5000 polygons each) navigating virtual environments on a single display PC, a HMD \u0000(Head Mounted Display), or a CAVE system. Results of several applications on a number of platforms are presented.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128621873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realtime Dense Stereo Matching with Dynamic Programming in CUDA
DOI: 10.2312/LocalChapterEvents/CEIG/CEIG09/231-234
John Congote, Javier Barandiarán, I. Barandiaran, O. Ruiz
Real-time depth extraction from stereo images is an important process in computer vision. This paper proposes a new implementation of the dynamic programming algorithm to calculate dense depth maps using the CUDA architecture, achieving real-time performance with consumer graphics cards. We compare the running time of the algorithm against a CPU implementation and demonstrate the scalability of the algorithm by testing it on different graphics cards.
{"title":"Realtime Dense Stereo Matching with Dynamic Programming in CUDA","authors":"John Congote, Javier Barandiarán, I. Barandiaran, O. Ruiz","doi":"10.2312/LocalChapterEvents/CEIG/CEIG09/231-234","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG09/231-234","url":null,"abstract":"Real-time depth extraction from stereo images is an important process in computer vision. This paper proposes a new implementation of the dynamic programming algorithm to calculate dense depth maps using the CUDA architecture achieving real-time performance with consumer graphics cards. We compare the running time of the algorithm against CPU implementation and demonstrate the scalability property of the algorithm by testing it on different graphics cards.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133383509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Light Source Detection in Photographs
DOI: 10.2312/LocalChapterEvents/CEIG/CEIG09/161-167
Jorge López-Moreno, Sunil Hadap, E. Reinhard, D. Gutierrez
Common tasks related to image processing or augmented reality include rendering new objects into existing images, or matching objects with unknown illumination. To facilitate such algorithms, it is often necessary to infer from which directions a scene was illuminated, even if only a photograph is available. For this purpose, we present a novel light source detection algorithm that, contrary to the current state of the art, is able to detect multiple light sources with sufficient accuracy. No 3D measurements are required: only the input image and a very small amount of unskilled user interaction.
{"title":"Light Source Detection in Photographs","authors":"Jorge López-Moreno, Sunil Hadap, E. Reinhard, D. Gutierrez","doi":"10.2312/LocalChapterEvents/CEIG/CEIG09/161-167","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG09/161-167","url":null,"abstract":"Common tasks related to image processing or augmented reality include rendering new objects into existing images, or matching objects with unknown illumination. To facilitate such algorithms, it is often necessary to infer from which directions a scene was illuminated, even if only a photograph is available. For this purpose, we present a novel light source detection algorithm that, contrary to the current state-of-the-art, is able to detect multiple light sources with sufficient accuracy. 3D measures are not required, only the input image and a very small amount of unskilled user interaction.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133996824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Visual Interface for Feature Subset Selection Using Machine Learning Methods
DOI: 10.2312/CEIG.20181165
Diego Rojo, Laura Raya, M. Rubio-Sánchez, Alberto Sánchez
Visual representation of information remains a key part of exploratory data analysis, due to the high number of features in datasets and their increasing complexity, together with users' ability to understand information visually. One of the most common operations in exploratory data analysis is the selection of relevant features in the available data. In multidimensional scenarios, this task is often carried out with the help of automatic dimensionality reduction algorithms from the machine learning field. In this paper we develop a visual interface in which users are integrated into the feature selection process of several machine learning algorithms. Users can work interactively with the algorithms in order to explore the data, compare the results, and make appropriate decisions about the feature selection process.
CCS Concepts: • Human-centered computing → Visual analytics; Visualization systems and tools; • Computing methodologies → Feature selection
{"title":"A Visual Interface for Feature Subset Selection Using Machine Learning Methods","authors":"Diego Rojo, Laura Raya, M. Rubio-Sánchez, Alberto Sánchez","doi":"10.2312/CEIG.20181165","DOIUrl":"https://doi.org/10.2312/CEIG.20181165","url":null,"abstract":"Visual representation of information remains a key part of exploratory data analysis. This is due to the high number of features in datasets and their increasing complexity, together with users’ ability to visually understand information. One of the most common operations in exploratory data analysis is the selection of relevant features in the available data. In multidimensional scenarios, this task is often done with the help of automatic dimensionality reduction algorithms from the machine learning field. In this paper we develop a visual interface where users are integrated into the feature selection process of several machine learning algorithms. Users can work interactively with the algorithms in order to explore the data, compare the results and make the appropriate decisions about the feature selection process. CCS Concepts •Human-centered computing → Visual analytics; Visualization systems and tools; •Computing methodologies → Feature selection;","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122400689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthetic Data Set Generation for the Evaluation of Image Acquisition Strategies Applied to Deep Learning Based Industrial Component Inspection Systems
DOI: 10.2312/ceig.20211355
F. Saiz, Garazi Alfaro, I. Barandiaran, Sara García, M. P. Carretero, M. Graña
Automated visual inspection is an ongoing machine vision challenge for industry. Faced with increasingly demanding quality standards, it is reasonable to address the transition from a manual inspection system to an automatic one using advanced machine learning approaches such as deep learning models. However, the introduction of neural models in environments such as the manufacturing industry faces certain impediments or limitations. Indeed, due to the harsh conditions of manufacturing environments, it is usually difficult to collect a high-quality database for training neural models. Also, the imbalance between non-defective and defective samples is a very common issue in this type of scenario. To alleviate these problems, this work proposes a pipeline to generate rendered images from CAD models of industrial components, which subsequently feed an anomaly detection model based on deep learning. Our approach can simulate the potential geometric and photometric transformations under which the parts could be presented to a real camera, faithfully reproducing the image acquisition behavior of an automatic inspection system. We evaluated the accuracy of several neural models trained with different synthetically generated data sets simulating different transformations, such as part temperature or part position and orientation with respect to a given camera. The results show the feasibility of the proposed approach for the design and evaluation of the image acquisition setup and for guaranteeing the success of the real future application.
CCS Concepts: • Computing methodologies → Quality Inspection; Industrial Manufacturing; Photo-realistic Rendering; CAD Models; Anomaly Detection; Deep Learning; Generative Adversarial Networks
{"title":"Synthetic Data Set Generation for the Evaluation of Image Acquisition Strategies Applied to Deep Learning Based Industrial Component Inspection Systems","authors":"F. Saiz, Garazi Alfaro, I. Barandiaran, Sara García, M. P. Carretero, M. Graña","doi":"10.2312/ceig.20211355","DOIUrl":"https://doi.org/10.2312/ceig.20211355","url":null,"abstract":"Automated visual inspection is an ongoing machine vision challenge for industry. Faced with increasingly demanding quality standards it is reasonable to address the transition from a manual inspection system to an automatic one using some advanced machine learning approaches such as deep learning models. However, the introduction of neural models in environments such as the manufacturing industry find certain impairments or limitations. Indeed, due to the harsh conditions of manufacturing environments, there is usually the limitation of collecting a high quality database for training neural models. Also, the imbalance between non-defective and defective samples is very common issue in this type of scenarios. To alleviate these problems, this work proposes a pipeline to generate rendered images from CAD models of industrial components, to subsequently feed an anomaly detection model based on Deep Learning. Our approach can simulate the potential geometric and photometric transformations in which the parts could be presented to a real camera to faithfully reproduce the image acquisition behavior of an automatic inspection system. We evaluated the accuracy of several neural models trained with different synthetically generated data set simulating different transformations such as part temperature or part position and orientation with respect to a given camera. The results shows the feasibility of the proposed approach during the design and evaluation process of the image acquisition setup and to guarantee the success of the real future application. CCS Concepts • Computing methodologies → Quality Inspection; Industrial Manufacturing; Photo-realistic Rendering; CAD Models; Anomaly Detection; Deep Learning; Generative Adversarial Networks;","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124530133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PterosaVR MUVHN: una aplicación para la reconstrucción virtual de Tropeognathus mesembrinus
DOI: 10.2312/ceig.20181159
Tonny Ruiz-Gijón, Marcos Gutiérrez Cubells, Borja Holgado, Hugo Salais López, M. Vidal-González, Anna García Forner
Both paleoart and new technologies are highly useful and visually attractive tools for depicting extinct life. In this project, the two disciplines have been combined in the development of a mobile application that, through a virtual reality headset, lets the user witness the reconstruction of the pterosaur Tropeognathus mesembrinus, walking through three anatomical levels: the skeleton, the musculature, and the external appearance. The reconstruction culminates in a scene of Tropeognathus flying over an Early Cretaceous landscape. The application also includes a 3D viewer for navigating the pterosaur's anatomy, as well as an option to display it in augmented reality upon finding the museum's logo. In conclusion, this application offers a novel and attractive way to present a paleontological piece to the general public, who can become familiar not only with Tropeognathus and the pterosaurs, but also with the process of reconstructing extinct life.
CCS Concepts: • Applied → Virtual Reality, Augmented Reality
{"title":"PterosaVR MUVHN: una aplicación para la reconstrucción virtual de Tropeognathus mesembrinus","authors":"Tonny Ruiz-Gijón, Marcos Gutiérrez Cubells, Borja Holgado, Hugo Salais López, M. Vidal-González, Anna García Forner","doi":"10.2312/ceig.20181159","DOIUrl":"https://doi.org/10.2312/ceig.20181159","url":null,"abstract":"Tanto el paleoarte como las nuevas tecnologías suponen herramientas muy útiles y visualmente atractivas para la representación de la vida extinta. En este proyecto, ambas disciplinas han sido combinadas en el desarrollo de una aplicación móvil que permite, mediante el uso de un casco de realidad virtual, asistir a la reconstrucción del pterosaurio Tropeognathus mesembrinus, paseando a través de tres niveles anatómicos: el esqueleto, la musculatura y el aspecto externo. La reconstrucción culmina con una escena de Tropeognathus sobrevolando un paisaje del Cretácico Inferior. Además, incluye un visor 3D por el que se puede navegar por la anatomía del pterosaurio, así como una opción de poder mostrarlo en realidad aumentada al encontrar el logo del museo. En conclusión, esta aplicación supone una forma novedosa y atractiva de exponer una pieza paleontológica al público general, que podrá familiarizarse no sólo con Tropeognathus y los pterosaurios, sino también con el proceso de reconstrucción de la vida extinta. CCS Concepts • Applied → Virtual Reality, Augmented Reality;","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128008873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dos Sistemas de Realidad Aumentada para el Tratamiento de la Acrofobia
DOI: 10.2312/LocalChapterEvents/CEIG/CEIG08/249-252
David Pérez, M. C. J. Lizandra
In this article we present two AR systems for the treatment of acrophobia. The first uses navigable photographs as virtual elements. In the second, the acrophobic sensations are produced by simulating that, suddenly, a hole opens in the floor or the walls rise. To assess the sense of presence and the degree of anxiety produced by these systems, two comparative studies were carried out. In the first, the first AR system (navigable photograph) was compared with the same real environment. In the second, the second AR system was compared with a similar VR system. The results showed that AR produces a sufficient sense of presence and anxiety in users without phobia. Consequently, pending trials with real patients, we are inclined to think that this type of system can be an alternative to VR for therapy.
{"title":"Dos Sistemas de Realidad Aumentada para el Tratamiento de la Acrofobia","authors":"David Pérez, M. C. J. Lizandra","doi":"10.2312/LocalChapterEvents/CEIG/CEIG08/249-252","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG08/249-252","url":null,"abstract":"En este artículo presentamos dos sistemas de RA para el tratamiento de la acrofobia. El primero de ellos utiliza fotos navegables como elementos virtuales. En el segundo las sensaciones acrofóbicas se producen simulando que repentinamente: se abre un agujero en el suelo o se suben las paredes. Para comprobar la sensación de presencia y grado de ansiedad producidos por estos sistemas, se han realizado dos estudios comparativos. En el primero de ellos, se ha comparado el primer sistema de RA (foto navegable) con el mismo entorno real. En el segundo, se ha comparado el segundo sistema de RA con un sistema similar de RV. Los resultados han demostrado que la RA produce suficiente sensación de presencia y ansiedad en usuarios sin fobia. Por consiguiente, a falta de hacer pruebas con pacientes reales, nos inclinamos a pensar que este tipo de sistemas puede ser una alternativa a la RV para terapia.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128420568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Texturización automática en Entornos urbanos utilizando Algoritmos Genéticos
DOI: 10.2312/LocalChapterEvents/CEIG/CEIG09/135-144
María Dolores Robles-Ortega, José López Expósito, Lidia M. Ortega-Alvarado, Francisco R. Feito-Higueruela
The virtual representation of real 3D environments poses certain texturing problems when the scenarios are very large. In particular, modeling an urban environment with thousands of buildings can be an enormous challenge if the real information of the city map must be respected, with its blocks, real building heights, street names, and other relevant information. The final goal is to enable free navigation through this virtual environment, offering the user the greatest possible sense of realism, regardless of where in the city they decide to move and whether or not there is information of interest in that area. Even starting from all the information available in an urban GIS, such a system does not usually store data about the real appearance of the buildings, which is essential for a realistic reconstruction of the whole city. This work proposes an alternative solution, automatically applying textures to those buildings for which no exact information about their appearance is available. To this end, two genetic algorithms are used to automatically assign textures.
{"title":"Texturización automática en Entornos urbanos utilizando Algoritmos Genéticos","authors":"María Dolores Robles-Ortega, José López Expósito, Lidia M. Ortega-Alvarado, Francisco R. Feito-Higueruela","doi":"10.2312/LocalChapterEvents/CEIG/CEIG09/135-144","DOIUrl":"https://doi.org/10.2312/LocalChapterEvents/CEIG/CEIG09/135-144","url":null,"abstract":"La representación virtual de entornos reales en 3 D plantea ciertos problemas de texturización cuando dichos escenarios son de grandes dimensiones. Concretamente, el modelado de un entorno urbano con miles de edificios puede suponer un enorme reto si debe respetarse la información real del plano de la ciudad con manzanas, alturas de edificios reales, así como nombres de calles u otro tipo de información relevante. El objetivo final es posibilitar la navegación libre por dicho entorno virtual ofreciendo al usuario la mayor sensación de realismo posible, independientemente del lugar exacto por donde decida moverse en la ciudad, y exista o no información de interés en dicha zona. A pesar de partir de toda la información posible disponible en un SIG urbano, éste no suele almacenar datos relativos al aspecto real de los inmuebles, lo cual resulta imprescindible para realizar un levantamiento realista de la ciudad completa. En este trabajo se propone una solución alternativa, aplicando texturas de forma automática a los edificios de los cuales no se tenga información exacta de su posible aspecto. Para ello se emplean dos algoritmos genéticos para asignar automáticamente texturas","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114164746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depth Map Repairing for Building Reconstruction
DOI: 10.2312/ceig.20181162
C. Andújar, O. Argudo, I. Besora, P. Brunet, A. Chica, Marc Comino
We present a simple method for detecting, classifying and filling non-valid data regions in depth maps produced by dense stereo algorithms. Triangle meshes reconstructed from our repaired depth maps exhibit much higher quality than those produced by state-of-the-art reconstruction algorithms such as Screened Poisson-based techniques.
{"title":"Depth Map Repairing for Building Reconstruction","authors":"C. Andújar, O. Argudo, I. Besora, P. Brunet, A. Chica, Marc Comino","doi":"10.2312/ceig.20181162","DOIUrl":"https://doi.org/10.2312/ceig.20181162","url":null,"abstract":"properties surface images from we present a simple method for detecting, classifying and filling non-valid data regions in depth maps produced by dense stereo algorithms. Triangles meshes reconstructed from our repaired depth maps exhibit much higher quality than those produced by state-of-the-art reconstruction algorithms like Screened Poisson-based techniques.","PeriodicalId":385751,"journal":{"name":"Spanish Computer Graphics Conference","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125398300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}