Title: Obvious: A system to explore alignment techniques and visualization of 3D views
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584382
Juarez Paulino da Silva Júnior, D. Borges, F. Vidal
3D reconstruction from partial views is a well-established practice for obtaining 3D models. It provides feedback to developers of digitizers on the accuracy of view acquisition, and it reduces cost compared to full acquisition systems. A further benefit is that it serves as a testbed and research tool for 3D alignment techniques. We propose a system, called Obvious, designed as modular, open-source C++ code, portable to Unix-like and Windows systems, with a user-friendly interface and state-of-the-art 3D alignment techniques such as 4PCS, D4PCS, and ICP. The system also provides modules for adding and modifying noise and perturbations in 3D data, which help evaluate alignment quality in terms of accuracy, robustness, and efficiency. As a 3D reconstruction and research system, Obvious gathers these algorithms and tools in a single open-source package for 3D enthusiasts.
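As an illustration of the kind of experiment such a testbed supports, the sketch below perturbs a point cloud with Gaussian noise and a known rigid motion, then re-aligns it with a basic point-to-point ICP loop. This is a minimal sketch in Python, not the Obvious API: the function names, the Kabsch-based refitting step, and all parameters are illustrative, and the 4PCS/D4PCS coarse alignment stages are not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def perturb(cloud, sigma=0.01, seed=0):
    """Add zero-mean Gaussian noise to every point (illustrative noise module)."""
    rng = np.random.default_rng(seed)
    return cloud + rng.normal(0.0, sigma, cloud.shape)

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Basic point-to-point ICP: match to nearest neighbours, refit, repeat."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Toy experiment: a noisy, rotated copy of a random cloud re-aligned by ICP.
model = np.random.default_rng(1).uniform(-1, 1, (500, 3))
theta = np.deg2rad(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
view = perturb(model @ Rz.T + np.array([0.05, 0.0, 0.0]))
aligned = icp(view, model)
print("RMS error after ICP:", np.sqrt(((aligned - model) ** 2).sum(1).mean()))
```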
{"title":"Obvious: A system to explore alignment techniques and visualization of 3D views","authors":"Juarez Paulino da Silva Júnior, D. Borges, F. Vidal","doi":"10.1109/IC3D.2011.6584382","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584382","url":null,"abstract":"3D reconstruction from partial views is one of the best practices for obtaining 3D models. It provides feedback to developers of digitalizers regarding views acquisition accuracy, and its practice reduces the cost compared to full acquisition systems. One further benefit is a testbed and research tool for 3D alignment techniques. We propose a system, called Obvious, which is designed as a modular, open source C++ coding, portable to unix like or windows systems, with friendly interface and state of the art 3D alignment techniques such as 4PCS, D4PCS, and ICP. The system also brings modules for including and modifying noise and perturbation to 3D data to help evaluate alignment quality such as accuracy, robustness, and efficiency. As a 3D reconstruction and research system Obvious has algorithms and tools originally provided in one open source tool for 3D enthusiasts.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116365000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Animation of 3D characters from single depth camera
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584374
Mian Ma, F. Xu, Yebin Liu
With the development of depth acquisition techniques in recent years, depth cameras have become able to obtain depth maps in real time. In this work, using a single depth camera, we propose a method to transfer the 3D motion of a performer to another character (an avatar or another human character). We capture the motion of a performer with a Kinect camera, which outputs the 3D positions of 15 joints of the performer for each frame. Even though the joint positions are quite noisy and erroneous in some frames, our method computes joint rotations with spatial and temporal consistency and transfers the motion to a target skeleton. Using a skinning technique, the captured skeleton motion drives the target 3D model, which then performs the same motion as the captured performer.
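One building block such a retargeting pipeline needs is turning noisy joint positions into bone rotations. The sketch below is a hedged illustration, not the authors' method: it smooths joint trajectories with a simple moving average (a stand-in for their temporal-consistency step) and computes the shortest-arc rotation aligning a rest-pose bone direction with the captured one; all names and parameters are hypothetical.

```python
import numpy as np

def smooth_joints(frames, window=5):
    """Temporal moving-average filter over per-frame joint positions (F x J x 3)."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 0, frames)

def bone_rotation(rest_dir, captured_dir):
    """Shortest-arc rotation matrix aligning the rest-pose bone direction with the captured one."""
    a = rest_dir / np.linalg.norm(rest_dir)
    b = captured_dir / np.linalg.norm(captured_dir)
    v = np.cross(a, b)
    c = np.dot(a, b)
    if np.isclose(c, -1.0):            # opposite directions: rotate 180 deg about a perpendicular axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)   # Rodrigues form of the shortest-arc rotation

# Toy usage: 120 frames of 15 noisy joints, and one elbow-to-wrist bone that points
# down in the rest pose and forward in the capture.
frames = np.random.default_rng(0).normal(size=(120, 15, 3))
print(smooth_joints(frames).shape)
R = bone_rotation(np.array([0.0, -1.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(np.round(R @ np.array([0.0, -1.0, 0.0]), 3))   # -> [0. 0. 1.]
```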
{"title":"Animation of 3D characters from single depth camera","authors":"Mian Ma, F. Xu, Yebin Liu","doi":"10.1109/IC3D.2011.6584374","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584374","url":null,"abstract":"As the development of depth acquisition techniques, in recent years, depth cameras achieve to obtain depth maps in real time. In this work, by using a single depth camera, we propose a method to transfer 3D motion of a performer to another character (avatars or other human characters). We capture the motion of a performer by a Kinect camera, which outputs the 3D positions of 15 joints of the performer for each frame. Even though the joints positions are quite noisy and with errors in some frames, our method achieves to calculate joints' rotations with spatial and temporal consistency and transfer the motion to a target skeleton. By using a skinning technique, the captured skeleton motion drives the target 3D model which performs the same motion with the captured performer.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125570986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Depth and texture imaging using time-varying color structured lights
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584373
Hyon‐Gon Choo, Roger Blanco i Ribera, J. Choi, Jinwoong Kim
In this paper, a depth and texture imaging method using time-varying color structured light is introduced. Three structured patterns are periodically projected onto a scene and captured by our system to produce depth and texture images simultaneously. The color difference between the three patterns is used to identify the color of the projected pattern, and the texture image is obtained at the same time by adding the colors of the patterned images. Experimental results show that the proposed method is suitable for depth and texture imaging of static and slowly moving objects.
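A rough sketch of the two recovery steps described above, under the assumption that the three projected colors sum to an approximately constant white at every pixel (the paper's actual pattern design may differ): the texture is recovered by adding (here, averaging) the three captures, and a per-pixel pattern label is taken from the frame that deviates most from the local mean. Function names and the decoding rule are illustrative, not the paper's exact ones.

```python
import numpy as np

def recover_texture(frames):
    """Texture estimate: per-pixel average of the three patterned captures.
    Assumes the three projected colors add up to roughly constant white,
    so the pattern cancels out and the scene albedo remains."""
    acc = sum(f.astype(np.float32) for f in frames) / len(frames)
    return np.clip(acc, 0, 255).astype(np.uint8)

def decode_pattern(frames):
    """Per-pixel pattern label: index of the frame whose colour deviates most
    from the per-pixel mean, taken as the frame whose stripe hit that pixel."""
    stack = np.stack([f.astype(np.float32) for f in frames])      # (3, H, W, 3)
    mean = stack.mean(axis=0, keepdims=True)
    return np.argmax(np.abs(stack - mean).sum(axis=-1), axis=0)   # (H, W) in {0, 1, 2}

# Toy usage with random arrays standing in for the three camera frames.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(3)]
print(recover_texture(frames).shape, decode_pattern(frames).shape)
```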
{"title":"Depth and texture imaging using time-varying color structured lights","authors":"Hyon‐Gon Choo, Roger Blanco i Ribera, J. Choi, Jinwoong Kim","doi":"10.1109/IC3D.2011.6584373","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584373","url":null,"abstract":"In this paper, a depth and texture imaging method using time-varying color structured lights is introduced. Three structured patterns are periodically projected on a scene and captured with our system to produce depth and texture images simultaneously. The color difference between three patterns is used to identify the projected pattern's color, and the texture image can be simultaneously obtained by adding the colors of the patterned images. The experimental results show that the proposed method is suitable for depth and texture imaging for static and slow moving objects.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133972765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Efficient aggregation via iterative block-based adapting support-weights
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584379
Leonardo De-Maeztu, S. Mattoccia, A. Villanueva, R. Cabeza
Local stereo matching algorithms based on adapting-weights aggregation produce excellent results compared to other local methods. In particular, they produce more accurate results near disparity edges. This improvement comes from the fact that the support for each pixel is accurately determined from information such as colour and spatial distance. However, computing the support for each pixel results in computationally complex algorithms, especially when large aggregation windows are used. Iterative aggregation schemes are a potential alternative to large windows. In this paper we propose a novel iterative approach to adapting-weights aggregation which produces better results and outperforms most previous adapting-weights methods.
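For reference, the sketch below shows the classic per-pixel adaptive support weight that such methods build on, with each neighbour weighted by colour similarity and spatial proximity to the window centre (Yoon-Kweon-style weights). It is only a baseline illustration: the paper's iterative block-based aggregation scheme is not reproduced, and the parameter values are arbitrary.

```python
import numpy as np

def support_weights(window, gamma_c=10.0, gamma_s=10.0):
    """Adaptive support weights for one aggregation window (H x W x 3):
    each neighbour gets exp(-colour_distance/gamma_c - spatial_distance/gamma_s)
    relative to the centre pixel. Parameter values are illustrative."""
    h, w, _ = window.shape
    cy, cx = h // 2, w // 2
    centre = window[cy, cx].astype(np.float32)
    d_color = np.linalg.norm(window.astype(np.float32) - centre, axis=-1)
    yy, xx = np.mgrid[0:h, 0:w]
    d_space = np.hypot(yy - cy, xx - cx)
    return np.exp(-d_color / gamma_c - d_space / gamma_s)

def aggregated_cost(left_win, right_win, costs):
    """Weighted average of raw matching costs over the window, using weights
    from both views (symmetric aggregation)."""
    w = support_weights(left_win) * support_weights(right_win)
    return float((w * costs).sum() / w.sum())

# Toy usage: 9x9 windows and absolute-difference raw costs.
rng = np.random.default_rng(0)
L = rng.integers(0, 256, (9, 9, 3), dtype=np.uint8)
R = rng.integers(0, 256, (9, 9, 3), dtype=np.uint8)
raw = np.abs(L.astype(np.float32) - R.astype(np.float32)).mean(axis=-1)
print(aggregated_cost(L, R, raw))
```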
{"title":"Efficient aggregation via iterative block-based adapting support-weights","authors":"Leonardo De-Maeztu, S. Mattoccia, A. Villanueva, R. Cabeza","doi":"10.1109/IC3D.2011.6584379","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584379","url":null,"abstract":"Local stereo matching algorithms based on adapting-weights aggregation produce excellent results compared to other local methods. In particular, they produce more accurate results near disparity edges. This improvement is obtained thanks to the fact that the support for each pixel is accurately determined based on information such as colour or spatial distance. However, the computation of the support for each pixel results in computationally complex algorithms, especially when using large aggregation windows. Iterative aggregation schemes are a potential alternative to using large windows. In this paper we propose a novel iterative approach for adapting-weights aggregation which produces better results and out-performs most previous adapting-weights methods.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127953762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A study on a program-associated 3D NRT broadcasting system using a time stamp based synchronization method
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584388
Hyun-Jeong Yim, Hyoungjin Kwon, K. Yun, W. Cheong, N. Hur
This paper introduces a program-associated 3D NRT system that can provide high-quality stereoscopic images over a terrestrial broadcasting network. In the proposed system, a traditional DTV channel delivers the reference images in real time, while the ATSC NRT mechanism is extended to deliver the additional images in advance of their use. To provide 3DTV services that combine a real-time stream with NRT content, this study proposes a time-stamp-based synchronization method that achieves frame-level synchronization accuracy.
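A minimal sketch of timestamp-based pairing between a live reference frame and a pre-delivered NRT frame, assuming both share a common 90 kHz presentation-time clock: the stored frame whose PTS falls within half a frame period of the live frame's PTS is selected. The class, field names, and tolerance are hypothetical; the ATSC NRT signalling and the paper's exact synchronization rules are not shown.

```python
import bisect

FRAME_PERIOD_90KHZ = 3003   # one frame at 29.97 fps on a 90 kHz clock (illustrative)

class NrtFrameStore:
    """Pre-downloaded additional-view frames indexed by presentation timestamp (PTS)."""
    def __init__(self, frames):
        # frames: iterable of (pts, payload), pts on the same 90 kHz clock as the live stream
        self._items = sorted(frames)
        self._keys = [pts for pts, _ in self._items]

    def match(self, live_pts, tolerance=FRAME_PERIOD_90KHZ // 2):
        """Return the stored frame whose PTS is closest to live_pts, or None if
        the nearest one is more than half a frame period away."""
        i = bisect.bisect_left(self._keys, live_pts)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self._keys)]
        if not candidates:
            return None
        best = min(candidates, key=lambda j: abs(self._keys[j] - live_pts))
        return self._items[best][1] if abs(self._keys[best] - live_pts) <= tolerance else None

# Toy usage: three stored additional-view frames and two live reference frames.
store = NrtFrameStore([(0, "R0"), (3003, "R1"), (6006, "R2")])
print(store.match(3010))    # -> "R1" (within half a frame period)
print(store.match(20000))   # -> None (live frame far beyond the stored range)
```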
{"title":"A study on a program-associated 3D NRT broadcasting system using a time stamp based synchronization method","authors":"Hyun-Jeong Yim, Hyoungjin Kwon, K. Yun, W. Cheong, N. Hur","doi":"10.1109/IC3D.2011.6584388","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584388","url":null,"abstract":"This paper introduces a program-associated 3D NRT system that can provide high quality stereoscopic images via a terrestrial broadcasting network. A traditional DTV system is used for sending reference images in real-time and the ATSC NRT mechanism is extended for delivering additional images in advance of its use in the suggested system. For providing 3DTV services using a real-time stream and NRT content, this study proposes a time stamp based synchronization method that can provide frame level accurate synchronization.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126572872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Practical aspects on the use of stereoscopic applications in operative theatres
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584395
J. Ilgner, M. Westhofen
Introduction: Since the 1950s, the widespread use of endoscopes and microscopes has facilitated new operative techniques in almost all surgical specialties, techniques that aim to restore organ function through minimally invasive access. A major disadvantage is that these procedures are difficult to monitor and to teach. Real-time digital image capture, data processing, and display technologies have now enabled real-time stereoscopic HD monitoring. This article focuses on the use of stereoscopic video applications in microscopic oto-rhino-laryngologic surgery.
{"title":"Practical aspects on the use of stereoscopic applications in operative theatres","authors":"J. Ilgner, M. Westhofen","doi":"10.1109/IC3D.2011.6584395","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584395","url":null,"abstract":"Introduction: Since the 1950's the widespread use of endoscopes and microscopes has facilitated new operative techniques in almost all surgical specialties which target restoration of organ functions combined with minimally invasive access. A major disadvantage is that these procedures are difficult to monitor and teach. Digital image retrieval in real-time, data processing and display technologies have enabled stereo HD monitoring in real time. This article focuses on the use of stereo video applications in microscopic surgery of oto-rhino-laryngologic procedures.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131159706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Multidimensional multiscale parser compression of 3D meshes
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584385
Akram Elkefi, Anis Meftah, M. Antonini, C. Amar
In this paper, we propose two 3D mesh compression methods based on the "multidimensional multiscale parser" (MMP). The first method transforms the 3D object into a 2D image using the geometry image representation [3]. The second projects the wavelet transform of the object into a 2D image. These 2D images are then coded with the MMP. At low bitrates (0.3 to 1 bit/vertex), we obtain results on the order of 0.5 dB better than the plain wavelet transform method [1]. Moreover, our method processes the data progressively during acquisition while considerably reducing memory usage. Scan-based processing becomes necessary when very large volumes of data must be compressed with minimal memory resources; since high-precision 3D meshes exceed several million points, processing this kind of data quickly becomes difficult. With our scan-based method, we reach memory levels even lower than those of the wavelet transform (WT) method of [1], with better compression quality.
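As a small illustration of the wavelet side of this pipeline, the sketch below applies one level of a 2D Haar decomposition channel-wise to a geometry image (an H x W x 3 array of vertex coordinates). It shows only one ingredient, under the assumption of a simple Haar filter: the MMP coder itself and the wavelet actually used in [1] are not shown.

```python
import numpy as np

def haar2d_level(img):
    """One 2D Haar analysis level applied channel-wise to a geometry image
    (H x W x 3 array of vertex coordinates, H and W even). Returns the four
    subbands LL, LH, HL, HH; repeating it on LL gives a multiscale pyramid."""
    a = img.astype(np.float64)
    # rows
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    # columns
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

# Toy usage: a 256x256 geometry image with smooth coordinates; most high-frequency
# coefficients are near zero, which is what makes the subsequent coding efficient.
y, x = np.mgrid[0:256, 0:256] / 255.0
geom = np.dstack([x, y, np.sin(4 * np.pi * x) * 0.1])
ll, lh, hl, hh = haar2d_level(geom)
print(ll.shape, float(np.abs(hh).mean()))
```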
{"title":"Multidimensional multiscale parser compression of 3D meshes","authors":"Akram Elkefi, Anis Meftah, M. Antonini, C. Amar","doi":"10.1109/IC3D.2011.6584385","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584385","url":null,"abstract":"In this paper, we proposed two 3D mesh compression methods based on the “multidimensional multiscale parser”. The main idea of the first method is to transform the 3D object into a 2D image using the geometry image [3]. The second method is to project the wavelet transform of the object into a 2D image. The coding is processed using the MMP on these 2D images. At low bitrates, (from 0.3 to 1 bit/vertex) we have a better result in the order of 0.5 dB than the simple wavelet transform method [1]. Moreover, our method consists in processing the data progressively during acquisition while reducing considerably the memory. The problem of scan-based processing arises when compressing very large volumes of data using a minimum of memory resources. Knowing that the 3D meshes with a high degree of precision have sizes exceeding several million points, the difficulty of processing quickly arises related to this kind of data. With our scan-based method, we were able to reach levels memory even smaller than in the wavelet transform method (WT) of [1] with a better compression quality.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116119296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A 2D-to-3D chip and its application to TV systems
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584380
Wei Hao, J. Suo, Qionghai Dai
In this paper, we design a real-time 2D-to-3D conversion chip. The chip architecture includes three key modules: a 2D-to-3D conversion core, 3D format conversion, and a peripheral I/O circuit. In particular, the calculation precision and smoothing filter are carefully designed and implemented according to the chip's features, so that the quality of the output 3D video is greatly improved. Experimental results on prototypes show that the chip gives promising conversion results and has broad application prospects.
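As a generic illustration of what a 2D-to-3D conversion core computes (not the chip's actual algorithm), the sketch below performs a simple depth-image-based rendering step: the depth map is smoothed, scaled to a disparity, and each pixel is shifted horizontally to synthesize the second view. Occlusions and hole filling are ignored, and all names and parameters are illustrative.

```python
import numpy as np

def synthesize_right_view(frame, depth, max_disp=16):
    """Generic depth-image-based rendering (illustrative, not the chip's algorithm):
    smooth the depth map, convert it to a per-pixel disparity, and shift each
    pixel horizontally to synthesize the second view of a stereo pair."""
    h, w, _ = frame.shape
    # 5-tap box smoothing along rows, standing in for the chip's smoothing filter
    kernel = np.ones(5) / 5.0
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                                   1, depth.astype(np.float32))
    disparity = (smoothed / smoothed.max() * max_disp).astype(np.int32)
    right = np.zeros_like(frame)
    cols = np.arange(w)
    for y in range(h):
        target = np.clip(cols - disparity[y], 0, w - 1)   # shift proportional to depth value
        right[y, target] = frame[y, cols]
    return right

# Toy usage: a random frame paired with a horizontal depth ramp.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
depth = np.tile(np.linspace(1.0, 255.0, 160), (120, 1))
print(synthesize_right_view(frame, depth).shape)
```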
{"title":"A 2D-to-3D chip and its application to TV systems","authors":"Wei Hao, J. Suo, Qionghai Dai","doi":"10.1109/IC3D.2011.6584380","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584380","url":null,"abstract":"In this paper, we design a real-time 2D to 3D conversion chip. Architecture of this chip includes three key modules: 2D to 3D conversion core, 3D format conversion and peripheral I/O circuit. Specially, calculation precision and smooth filter are carefully designed and implemented according to the features of chip, such that the quality of output 3D video is largely improved. The experiment results on the prototypes show that our chip can give promising conversion results and has a broad application foreground.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133727874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Tridimensional laser scanning to retrieve engineering site drawings. The experience of the Brussels Park bunker rehabilitation project
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584367
Arnaud Schenkel, Rudy Ercek, B. Penelle, N. Warzée, A. Hubrecht, Tanguy Saroléa
A project aims to rehabilitate the Brussels Park bunker, transforming a disused site into a high-tech center that meets current construction standards. It is therefore essential to base the rehabilitation strategy on correct plans. The original plans, dating from 1939, have been partially retrieved; however, they are incomplete and do not reflect the transformations made over time. A topographic survey produced a preliminary plan, but it contains inconsistencies in both the geometry and the layout of the different rooms. To obtain documentation and measurements, a survey was carried out using a 3D laser scanner. In addition to complete plans, this work gathered the information needed for virtual visits. Various processing steps were applied to the data to obtain detailed plans, confirming the inconsistencies identified in 2010. This paper presents the results obtained and highlights the added value of 3D scanning for recovering plans of existing structures.
{"title":"Tridimensional laser scanning to retrieve engineering site drawings. The experience of the Brussels Park bunker rehabilitation project","authors":"Arnaud Schenkel, Rudy Ercek, B. Penelle, N. Warzée, A. Hubrecht, Tanguy Saroléa","doi":"10.1109/IC3D.2011.6584367","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584367","url":null,"abstract":"A project aims to rehabilitate the Brussels Park bunker. The objective is to transform a disused place in a high-tech center according to current construction standards. Thus it is essential to base the rehabilitation strategy on correct plans. The original plans, dating from 1939, have been partially retrieved. However, they are incomplete and do not consider the transformations made over time. A topographic survey has produced a preliminary plan, but it contains inconsistencies in both geometry and disposition of the different rooms. To obtain documentation and measurements, a survey was carried out using a 3D laser scanner. In addition to complete plans, this work accumulated necessary information for virtual visits. Different processings were applied on data to obtain detailed plans, confirming the inconsistencies identified in 2010. This paper includes the results obtained and highlights added values of 3D scanning for recovering plans of existing structures.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"179 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126994185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Frontal projection-type 3D display using micro convex mirror array and relay optic
Pub Date: 2011-12-01 | DOI: 10.1109/IC3D.2011.6584386
Jonghyun Kim, Jisoo Hong, Jae-Hyun Jung, Byoungho Lee
We propose a novel frontal projection-type 3D display system using a micro convex mirror array and relay optics to improve the depth expression capability. First, we analyze the proposed system using ray optics. Then, we experimentally determine the marginal depth plane of a conventional frontal projection-type 3D display system. Finally, we perform an experiment with the proposed system to verify our idea.
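For readers unfamiliar with the ray-optics element involved, the sketch below evaluates the thin-mirror relation for a convex mirror (negative focal length), the kind of elementary relation such an analysis builds on. The numbers are illustrative only and are not the paper's system parameters or its marginal-depth-plane derivation.

```python
def mirror_image_distance(object_distance_mm, focal_length_mm):
    """Thin-mirror relation 1/u + 1/v = 1/f. For a convex mirror f is negative,
    so the image is virtual (v < 0) and lies behind the mirror surface."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# Toy numbers (illustrative, not the paper's parameters): a projected pixel
# 500 mm in front of a micro convex mirror with a -5 mm focal length.
v = mirror_image_distance(500.0, -5.0)
print(round(v, 3), "mm (negative: virtual image behind the mirror)")
```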
{"title":"Frontal projection-type 3D display using micro convex mirror array and relay optic","authors":"Jonghyun Kim, Jisoo Hong, Jae-Hyun Jung, Byoungho Lee","doi":"10.1109/IC3D.2011.6584386","DOIUrl":"https://doi.org/10.1109/IC3D.2011.6584386","url":null,"abstract":"We propose a novel frontal projection-type 3D display system using micro convex mirror array and relay optic to improve the depth expression capability. First, we make an analysis of our proposed system by ray optics. Then, we find the marginal depth plane of conventional frontal projection-type 3D display system by an experiment. Finally we perform an experiment with our proposed system to verify our idea.","PeriodicalId":395174,"journal":{"name":"2011 International Conference on 3D Imaging (IC3D)","volume":"602 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123205480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}