Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032604
Title: Development and validation of a 3D kinematic-based method for determining gait events during overground walking
Authors: M. Boutaayamou, C. Schwartz, V. Denoël, B. Forthomme, J. Croisier, G. Garraux, J. Verly, O. Brüls
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: A new signal processing algorithm is developed for quantifying heel strike (HS) and toe-off (TO) event times solely from measured heel and toe coordinates during overground walking. It is based on a rough estimation of relevant local 3D position signals. An original piecewise linear fitting method is applied to these local signals to accurately identify HS and TO times without the need for arbitrary experimental coefficients. We validated the proposed method with nine healthy subjects and a total of 322 trials. The extracted temporal gait events were compared to reference data obtained from a force plate. HS and TO times were identified with a temporal accuracy ± precision of 0.3 ms ± 7.1 ms and -2.8 ms ± 7.2 ms, respectively, relative to reference data defined with a force threshold of 10 N. This algorithm improves the accuracy of HS and TO detection. Furthermore, it can be used to perform stride-by-stride analysis during overground walking using only recorded heel and toe coordinates.
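The paper's exact fitting procedure is not reproduced here, but the core idea of locating an event time by piecewise linear fitting can be sketched: fit two line segments to a 1-D position signal and take the breakpoint minimising the total residual as the event time. The function name and the synthetic signal below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def two_segment_breakpoint(t, y):
    """Fit two line segments to (t, y); return the breakpoint time that
    minimises the total squared fitting residual."""
    best_err, best_t = np.inf, t[0]
    for k in range(2, len(t) - 2):                 # candidate breakpoint indices
        err = 0.0
        for sl in (slice(0, k + 1), slice(k, len(t))):
            A = np.vstack([t[sl], np.ones(len(t[sl]))]).T
            _, res, *_ = np.linalg.lstsq(A, y[sl], rcond=None)
            err += res[0] if res.size else 0.0
        if err < best_err:
            best_err, best_t = err, t[k]
    return best_t

# Synthetic vertical heel position: descending, then resting after "heel strike"
t = np.linspace(0.0, 1.0, 101)
y = np.where(t < 0.4, 0.4 - t, 0.0)
print(two_segment_breakpoint(t, y))                # ≈ 0.4
```

On the synthetic trajectory the breakpoint coincides with the moment the heel stops moving, which is the intuition behind detecting gait events from position data alone.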
Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032575
Title: Towards automatic stereo pair extraction for 3D visualisation of historical aerial photographs
Authors: A. Hast, Andrea Marchetti
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: An efficient and almost automatic method for stereo pair extraction of aerial photos is proposed. Several challenging problems need to be taken into consideration when creating stereo pairs from historical aerial photos. These problems are discussed and solutions are proposed in order to obtain an almost automatic procedure requiring as little input from the user as possible. The result is a rectified and illumination-corrected stereo pair. We also discuss why viewing aerial photos in stereo is important: the depth cue gives more information than single photos do.
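The abstract mentions illumination correction of the pair but does not specify the method; one common technique for equalising the photometry of two photos is histogram matching, sketched below as a minimal illustration under that assumption, not necessarily the authors' procedure:

```python
import numpy as np

def match_histogram(src, ref):
    """Remap the intensities of `src` so its empirical histogram matches `ref`.
    Both are 2-D grayscale arrays."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size        # empirical CDF of source
    r_cdf = np.cumsum(r_cnt) / ref.size        # empirical CDF of reference
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return mapped[s_idx].reshape(src.shape)
```

For a pair of photos of the same scene taken under different exposures, matching one image's histogram to the other removes the global brightness difference before stereo viewing.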
Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032586
Title: An efficient depth estimation using temporal 3D-Warping
Authors: S. H. Kumar, K. Suraj, K. Ramakrishnan
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: This paper presents a computationally efficient method for estimating high-quality depth for multiview video acquired by a camera array in motion. Depth information is essential for 3DTV display systems to generate video streams from virtual viewpoints. Dense depth estimation has been successfully modeled as a Markov Random Field, and several methods, such as Iterated Conditional Modes, Graph Cuts, and Belief Propagation, have been proposed to solve it. While depth estimation using Graph Cuts or Belief Propagation gives accurate results, their computational requirements are high. Iterated Conditional Modes, on the other hand, is fast, but the quality of its result is poor. We propose a technique that boosts the quality of the depth estimated with Iterated Conditional Modes to near Graph Cuts or Belief Propagation levels while keeping the computational cost low.
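As a reference point for the trade-off described above, Iterated Conditional Modes itself is short to implement: each pixel greedily takes the label minimising its unary cost plus a smoothness penalty against the current labels of its four neighbours. The sketch below uses a linear smoothness term and toy costs; it illustrates plain ICM only, not the paper's temporal 3D-warping boost:

```python
import numpy as np

def icm(data_cost, lam=1.0, n_iter=5):
    """Iterated Conditional Modes on a 4-connected grid MRF.
    data_cost: (H, W, L) unary cost per pixel per label.
    Smoothness: lam * |label - neighbour_label|."""
    H, W, L = data_cost.shape
    labels = data_cost.argmin(axis=2)          # greedy init from unaries
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best, best_c = labels[i, j], np.inf
                for l in range(L):
                    c = data_cost[i, j, l]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            c += lam * abs(l - labels[ni, nj])
                    if c < best_c:
                        best, best_c = l, c
                labels[i, j] = best
    return labels

# Toy example: unaries favour disparity 1 everywhere, except a speckle at (1, 1)
H, W, L = 3, 3, 4
cost = np.abs(np.arange(L)[None, None, :] - 1).astype(float) * np.ones((H, W, 1))
cost[1, 1] = 0.5 * np.abs(np.arange(L) - 3)    # noisy pixel prefers label 3
print(icm(cost))                               # smoothness flips the speckle to 1
```

The speckle is removed because the smoothness penalty against four agreeing neighbours outweighs the pixel's weak unary preference — the same mechanism, on real matching costs, that regularises a dense disparity map.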
Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032594
Title: A simple solution for the non perspective three point pose problem
Authors: Mohamed H. Merzban, M. Abdellatif, A. Abouelsoud, Ahmed Ali
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: The Non-Perspective Three Point Pose (NP3P) problem is a generalization of the classical three-point pose problem to multi-camera systems that have no common projection center. In this paper, we develop a simple, minimal algebraic solution to the NP3P problem in which the projection rays of the three points may have arbitrary but known directions. This problem is known to have a maximum of eight solutions. The problem is formulated mathematically as the solution of three multivariate polynomials. The Sylvester matrix resultant of two equations is used to obtain an eighth-order polynomial that can be solved to yield the pose parameters. The accuracy and computational cost of the new method are compared to other methods reported in the literature; it was found to have comparable accuracy at a lower computational cost.
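The Sylvester-resultant step can be illustrated in the univariate case: the resultant of two polynomials is the determinant of their Sylvester matrix, and it vanishes exactly when the polynomials share a root. The helper below is a generic sketch of that construction, not the paper's trivariate elimination:

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of polynomials p, q given as coefficient lists
    (highest degree first). Shape (m+n, m+n) with m = deg p, n = deg q."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                      # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                      # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

def resultant(p, q):
    return np.linalg.det(sylvester(p, q))

# p = (x-1)(x-2) = x^2 - 3x + 2,  q = (x-1)(x+3) = x^2 + 2x - 3
# share the root x = 1, so the resultant vanishes
print(resultant([1, -3, 2], [1, 2, -3]))    # ≈ 0
```

Eliminating one variable from two bivariate polynomials works the same way, with the coefficients treated as polynomials in the remaining variable; repeated elimination is what yields the single eighth-order polynomial in the pose unknown.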
Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032590
Title: Floating display screen formed by AIRR (Aerial imaging by retro-reflection) for interaction in 3D space
Authors: Hirotsugu Yamamoto, M. Yasui, M. S. Alvissalim, Masashi Takahashi, Yuka Tomiyama, S. Suyama, M. Ishikawa
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: This paper presents an interaction system with a floating display screen. Aerial imaging by retro-reflection (AIRR) forms an aerial LED screen that floats over a tabletop and is visible over a viewing angle well in excess of 120 degrees. To reduce latency, our system employs a high-frame-rate LED display and high-speed stereoscopic cameras. The developed system enables users to interact spontaneously with aerially displayed information.
Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032593
Title: Real-time tracking with an embedded 3D camera with FPGA processing
Authors: A. Muscoloni, S. Mattoccia
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: People tracking is a crucial component of many intelligent video surveillance systems, and recent developments in embedded computing architectures and algorithms allow us to design compact, lightweight, and energy-efficient systems for tackling this problem. In particular, the advent of cheap RGBD sensing devices makes it possible to exploit depth information as an additional cue. In this paper we propose a 3D tracking system intended to become the basic node of a distributed system for business analytics applications. In the envisioned distributed system, each node would consist of a custom stereo camera with on-board FPGA processing coupled with a compact CPU-based board. In the basic node proposed here, aimed at raw people tracking within the sensed area of a single device, the custom stereo camera delivers accurate dense depth maps in real time and with minimal energy requirements, using state-of-the-art computer vision algorithms. The CPU-based board then processes this information to achieve reliable 3D people tracking. With the FPGA front-end deployed, the main constraint on real-time 3D tracking is the computing requirement of the CPU-based board; we therefore propose a fast and effective 3D people tracking algorithm suited for implementation on embedded devices.
Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032597
Title: Visibility-driven patch group generation
Authors: S. Ebel, W. Waizenegger, M. Reinhardt, O. Schreer, I. Feldmann
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: The target application of this paper is 3D scene reconstruction for future real-time production scenarios in the broadcast domain, as well as future post-production and on-set visual effect previews in digital cinema. Our approach is based on multiple trifocal camera capture systems that can be arbitrarily distributed on set. In this work we tackle the problem of multi-view data fusion from a real-time perspective. The novelty of our work is that, instead of performing pixel-wise processing, we consider patch groups as higher-level scene representations. Based on the robust results of the trifocal sub-systems, we implicitly obtain an optimized set of patch groups, even for partly occluded regions, by applying a simple geometric rule set. Furthermore, we show that a simplified meshing can be applied to the patch group borders, which enables a GPU-centric real-time implementation. The presented algorithm is tested on real-world test shoot data for the 3D reconstruction of humans.
Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032592
Title: Detection of 3D position of eyes through a consumer RGB-D camera for stereoscopic mixed reality environments
Authors: Manuela Chessa, Matteo Garibotti, Guido Maiello, Lorenzo Caroggio, Huayi Huang, S. Sabatini, F. Solari
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: A novel approach is proposed to track the 3D position of the user's eyes in stereoscopic virtual environments where stereo glasses are worn. The approach improves a state-of-the-art real-time face tracking algorithm by addressing the occlusion due to the stereo glasses and by providing an estimate of eye position based on biometric features. More generally, our solution can be seen as a proof of concept for a more robust approach to improving motion tracking techniques. In particular, the proposed technique yields accurate and stable estimates of the 3D position of the user's eyes while the user moves in front of the stereoscopic display. Correctly tracking the 3D position of both eyes is a crucial step toward a more natural human-computer interaction that diminishes visual fatigue. The proposed approach is validated through quantitative tests: (i) we assessed the accuracy of our algorithm in tracking the 3D position of users' eyes with and without stereo glasses; (ii) we performed a perceptual assessment of the naturalness of interaction in the virtual environments through experimental sessions with several users.
Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032574
Title: Validation of subpixel area based simulation for autostereoscopic displays with parallax barriers
Authors: R. Bartmann, Mathias Kuhlmey, Ronny Netzbandt, R. Barré
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: Ideal autostereoscopic display designs show a symmetrical light intensity distribution in the viewer's space; real displays, however, always show some inhomogeneities. We investigated and simulated an autostereoscopic display with a motion parallax barrier. Our newly developed subpixel area model (SAM) serves as the basis for barrier- and intensity-dependent viewing zone calculations. We introduce the implementation of our SAM approach for simulating the luminance and content distribution at the viewing distance. Furthermore, a specific misalignment of the optical image splitter was simulated and metrologically compared with the effect of a similar error in an assembled autostereoscopic display. In detail, the truncated shape of the luminous subpixel area under the image splitter, the misalignment, and its result are described mathematically.
Pub Date: 2014-12-01 | DOI: 10.1109/IC3D.2014.7032587
Title: Interaction between size and disparity cues in distance judgements
Authors: Paul Hands, A. Khushu, J. Read
Venue: 2014 International Conference on 3D Imaging (IC3D)
Abstract: The human visual system can use the size of familiar objects as a cue to an object's depth in the world. With the advancement of stereoscopic 3D (S3D) displays, objects can now be displayed with differing size and binocular disparity cues to depth. We tested, for absolute and relative disparity cues, whether familiar size or disparity was the preferred indication of depth. We found that, when only absolute disparity cues are available, the retinal size of a familiar object has a significant effect on its perceived depth; with relative disparity, however, binocular disparity was a strong enough depth cue that size did not significantly determine the perceived depth of the familiar object.
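For context on the disparity cue discussed above, the standard rectified-stereo relation between disparity and physical depth is Z = f·B/d. The one-line sketch below states that general stereo geometry only; the numbers are illustrative, not the authors' experimental setup:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo for a rectified pair: depth Z = f * B / d.
    f_px: focal length in pixels, baseline_m: camera separation in metres,
    disparity_px: horizontal disparity in pixels."""
    return f_px * baseline_m / disparity_px

# e.g. f = 1000 px, a 6.5 cm interocular-like baseline, 10 px of disparity
print(depth_from_disparity(1000, 0.065, 10))   # 6.5 (metres)
```

The inverse dependence on d is why small disparity changes signal large depth changes for distant objects, which is part of what makes size a useful supplementary cue.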