We present a new feature for AR/VR applications on consumer mobile devices equipped with a video camera (e.g., smartphones). Direct or indirect scale estimation of the scene or of objects is necessary for realistic rendering of virtual objects in a real-world environment. Standard approaches usually rely on 3D vision with sensor fusion (e.g., visual SLAM) or on pattern recognition (e.g., AR markers, reference-object learning), and suffer from various limitations. Here, we argue that by combining inertial measurements and visual cues, the problem reduces to estimating a single 1D parameter: the distance from the device to the floor. In particular, we discuss robust solutions to the absolute scale estimation problem for indoor environments.
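The reduction to one parameter can be sketched as follows: once the floor plane is tracked visually (up to scale) and the device's metric height above the floor is known or estimated, the absolute scale of the reconstruction follows directly. This is a minimal illustration under assumed inputs, not the paper's implementation; all names are illustrative.

```python
import numpy as np

def absolute_scale(camera_pos, floor_point, floor_normal, height_metric):
    """Recover the absolute map scale from the camera-to-floor distance.

    camera_pos, floor_point, floor_normal live in the up-to-scale visual
    reconstruction frame; height_metric is the true device height above
    the floor in meters (the single 1D parameter discussed above).
    """
    n = floor_normal / np.linalg.norm(floor_normal)
    # Unscaled distance from the camera center to the floor plane.
    d_visual = abs(np.dot(camera_pos - floor_point, n))
    return height_metric / d_visual

# Example: the camera sits 2.0 (arbitrary) units above the floor in the
# reconstruction, but the true height is 1.5 m -> scale factor 0.75.
s = absolute_scale(np.array([0.0, 2.0, 0.0]),
                   np.array([0.0, 0.0, 0.0]),
                   np.array([0.0, 1.0, 0.0]),
                   1.5)
print(s)  # 0.75
```

Multiplying all reconstructed coordinates by this factor yields metric units, which is what realistic virtual-object placement requires.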
Shintaro Murakami, T. Mukasa, and Tony Tung. "Mobile virtual interior stylization from scale estimation." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945092
Conventional research on pedestrian navigation systems has explored the possibilities of presenting information to users both visually and aurally. Existing navigation systems require users to recognize information, and then to follow directions as separate, conscious processes, which inevitably require attention to the system. This study proposes a novel method that enables pedestrians to be guided without conscious interaction with a navigational system.
Akira Ishii, Ippei Suzuki, Shinji Sakamoto, Keita Kanai, Kazuki Takazawa, Hiraku Doi, and Yoichi Ochiai. "Graphical manipulation of human's walking direction with visual illusion." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945087
Kwamina Edum-Fotwe, P. Shepherd, Matthew Brown, Dan Harper, Richard Dinnis
This paper describes an intuitive, data-driven approach to reconstructing architectural facade models from unstructured point clouds. The algorithm yields sparse, semantically rich models that are better suited to interactive simulation than equivalent dense reconstructions, yet executes significantly faster than prevalent sparse operators. Its key advantages are accuracy, efficiency, and the ability to model irregular windows.
Kwamina Edum-Fotwe, P. Shepherd, Matthew Brown, Dan Harper, and Richard Dinnis. "Fast, accurate and sparse, automatic facade reconstruction from unstructured ground laser-scans." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945123
Lode Jorissen, Patrik Goorts, G. Lafruit, P. Bekaert
In recent years there has been growing interest in generating virtual views from a limited set of input cameras. This is especially useful for applications such as free-viewpoint navigation and light field displays [Tanimoto 2015]. The latter often require tens to hundreds of input views, while recording with that many cameras is often not feasible. View interpolation algorithms typically traverse a set of depths to find correspondences between the input images [Stankiewicz et al. 2013; Goorts et al. 2013]. Most algorithms choose a uniform set of depths to traverse (as shown in Figure 2(a)), but this often leads to an excessive amount of unnecessary computation in regions where no objects are located. It also increases the number of mismatches, and thus the inaccuracies in the generated views. The same problems occur when the selected depth range is too large. Hence, a depth range that tightly encloses the scene is typically selected by hand to mitigate these errors. A depth distribution that concentrates the depth layers around the objects in the scene, as shown in Figure 2(b), would reduce these errors and decrease the amount of computation by reducing the number of depths to search through. [Goorts et al. 2013] determine a nonuniform global depth distribution by reusing the depth information generated at the previous time stamp, which makes the algorithm dependent on previous results.
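The idea of concentrating depth layers around scene content can be sketched with a simple quantile-based placement: given rough depth estimates for sparse feature matches, layers are placed so that each carries an equal share of the observed depths. This is only a stand-in for the paper's DFT-based selection; all names are illustrative.

```python
import numpy as np

def nonuniform_depths(sparse_depths, n_layers):
    """Place plane-sweep depth layers where scene content actually is.

    sparse_depths: rough per-feature depth estimates (e.g. from sparse
    matches). Quantile placement concentrates layers in populated depth
    ranges instead of spreading them uniformly over [min, max].
    """
    qs = np.linspace(0.0, 1.0, n_layers)
    return np.quantile(np.asarray(sparse_depths), qs)

# Two object clusters around depths ~2 and ~10: the 8 layers gather
# around those depths rather than wasting layers on the empty middle.
np.random.seed(0)
depths = np.concatenate([np.random.normal(2.0, 0.1, 500),
                         np.random.normal(10.0, 0.1, 500)])
layers = nonuniform_depths(depths, 8)
```

A uniform distribution over the same range would spend most of its layers between 3 and 9, where nothing is located.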
Lode Jorissen, Patrik Goorts, G. Lafruit, and P. Bekaert. "Nonuniform depth distribution selection with discrete Fourier transform." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945133
We propose a method for adding visual details to fluid animation while reducing noisy appearances. In grid-based fluid simulations, highly detailed fluids with small eddies can be obtained by increasing the number of grid cells, but this greatly increases computation time. To address this, various methods for adding details (or up-scaling resolutions) have been proposed. These methods can generate fine animations quickly by adding high-frequency noise or external forces to coarse simulation results. However, they typically generate tiny eddies over the entire fluid surface, making the result appear too noisy. In this paper, we consider the distribution of kinetic energy in the spatial frequency domain and apply it to two existing methods for adding details. With our method, noise or external forces can be added at the appropriate positions in the fluid, and consequently natural-looking details can be achieved.
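The kind of frequency-domain energy measure the abstract refers to can be sketched as a radially binned kinetic energy spectrum of the velocity field. This is an illustrative computation only, not the paper's method; variable names are assumptions.

```python
import numpy as np

def energy_spectrum(u, v):
    """Radially binned kinetic energy E(k) of a 2D velocity field.

    Inspecting where kinetic energy sits in spatial frequency is the
    sort of measure that tells a detail-adding method where small-scale
    detail can be injected without the result turning uniformly noisy.
    """
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    e = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2)
    n = u.shape[0]
    kx, ky = np.meshgrid(np.fft.fftfreq(n) * n, np.fft.fftfreq(n) * n)
    k = np.sqrt(kx ** 2 + ky ** 2).astype(int)  # integer wavenumber bins
    return np.bincount(k.ravel(), weights=e.ravel())

# A single large Taylor-Green-style vortex: the energy concentrates in
# the lowest wavenumber bin, i.e. the flow has no fine-scale detail yet.
n = 64
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, n, endpoint=False),
                   np.linspace(0, 2 * np.pi, n, endpoint=False))
u, v = np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)
spec = energy_spectrum(u, v)
```

A coarse simulation shows energy only at low wavenumbers; comparing this spectrum against the expected turbulent cascade indicates where (spatially and in frequency) detail is missing.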
Masato Ishimuroya and T. Kanai. "Adding visual details based on low-resolution energy cascade ratios for smoke simulation." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945094
Markov Chain Monte Carlo (MCMC) has recently received a lot of attention in light transport simulation research [Hanika et al. 2015; Hachisuka et al. 2014]. While these methods aim at high-quality sampling of local extremes of the path space (so-called local exploration), the other issue - discovering these extremes in the first place - has so far been neglected. Poor global exploration results in oversampling some parts of the path space while undersampling or completely missing others (see Fig. 1). Such behavior of MCMC-based light transport algorithms limits their use in practice, since we can never tell for sure whether the image has already converged.
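The local-versus-global tension can be illustrated with a toy Metropolis sampler on a bimodal target: small mutations explore one mode well, while occasional independent "large steps" are what lets the chain discover the other mode at all. This is a generic MCMC illustration, not the paper's algorithm; the target and parameters are invented.

```python
import math
import random

def metropolis(f, n_steps, p_large=0.3, seed=1):
    """Metropolis sampler on [0,1] mixing local and global proposals.

    With p_large = 0, a chain started near one peak of a bimodal f
    rarely finds the second peak (poor global exploration); the
    independent uniform proposals restore mode discovery.
    """
    rng = random.Random(seed)
    x, fx, samples = 0.2, f(0.2), []
    for _ in range(n_steps):
        if rng.random() < p_large:
            y = rng.random()                                 # global move
        else:
            y = min(1.0, max(0.0, x + rng.gauss(0, 0.01)))   # local move
        fy = f(y)
        if fx == 0 or rng.random() < min(1.0, fy / fx):
            x, fx = y, fy
        samples.append(x)
    return samples

# Two narrow peaks at 0.2 and 0.8 stand in for two path-space extremes.
f = lambda x: (math.exp(-((x - 0.2) ** 2) / 0.001)
               + math.exp(-((x - 0.8) ** 2) / 0.001))
samples = metropolis(f, 20000)
frac_right = sum(s > 0.5 for s in samples) / len(samples)
```

Since the two modes carry equal mass, a well-mixing chain should split its samples roughly evenly between them; a chain without large steps visits almost exclusively the mode it started in.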
M. Sik and Jaroslav Křivánek. "Improving global exploration of MCMC light transport simulation." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945128
To create realistic CG images, information about the lighting is very important. There are two ways to estimate light-source information. One is direct measurement using images captured with a fish-eye lens or a spherical mirror [Debevec 1998]; the other is indirect measurement, which estimates the positions and intensities of the light sources from the shadows cast by objects [Sato et al. 2003]. In the direct method, by treating each pixel of the captured image as a light source with a corresponding intensity, it is possible to estimate the lighting environment densely. However, for a high-intensity light source like the sun, the dynamic range of the camera is insufficient, and the radiant intensity of the light source cannot be accurately estimated. We therefore propose a method that combines direct and indirect measurement: the light-source information in high-intensity areas of the captured image is estimated by the indirect method. In experiments using real images, the proposed method could measure the lighting environment even for outdoor scenes containing a high-intensity light source such as the sun. It was also confirmed that images with realistic shadows, equivalent to real images, could be created.
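The combination step described above can be sketched as follows: wherever the captured environment map is saturated, its radiance is unreliable, so the shadow-based indirect estimate is substituted there. The function name and fixed saturation threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def merge_light_estimates(env_map, indirect_map, saturation=0.98):
    """Combine direct and indirect light-source measurements.

    env_map: directly captured environment radiances, normalized so
    clipped pixels sit at ~1.0; indirect_map: radiances estimated from
    shadow information. Saturated pixels (the sun case) take the
    indirect estimate; all other pixels keep the dense direct one.
    """
    clipped = env_map >= saturation
    merged = env_map.copy()
    merged[clipped] = indirect_map[clipped]
    return merged

env = np.array([0.1, 0.5, 1.0, 1.0])         # last two pixels clipped
indirect = np.array([0.0, 0.0, 50.0, 80.0])  # shadow-based intensities
print(merge_light_estimates(env, indirect))  # [ 0.1  0.5 50.  80. ]
```

This preserves the dense sampling of the direct method while letting the indirect method supply the radiant intensity the camera could not record.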
Masashi Baba, Kesuke Haruta, and S. Hiura. "Estimating lighting environments based on shadow area in an omni-directional image." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945166
We propose an interactive system to assist novices in the design and construction of original necklaces. The system consists of two design tools, an interactive drag-and-drop design tool using images of pearls and a design selection tool with an interactive evolutionary computation (IEC) system. The system includes a virtual modeling simulation which allows users to superimpose a necklace design over their own photograph, taken with a web camera. The system also provides a customized construction guide to assist the user with the construction process. We conduct a field trial to demonstrate that non-professional users can design original necklaces using our system.
Y. Igarashi, Tsubasa Hiyama, and K. Arakawa. "An interactive system for original necklace design." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945082
By using a head-mounted display (HMD), we can have an immersive virtual reality experience, but the user cannot see any information from the real world. To solve this problem, a video see-through HMD can acquire images of the real environment and present them in the HMD, allowing us to build a mixed reality (MR) or augmented reality (AR) system. However, how to attach and calibrate cameras on an HMD to recover the real environment is still an open research issue. The HTC VIVE has a single camera on the front of the device; [Steptoe et al. 2014] and the OVRVISION Pro instead attach dual cameras to capture left and right images. Due to the difference in viewpoint, the images captured by the cameras differ from what the human eyes would see (Figure 2). Although true 3D information can be recovered with a depth map, some occluded areas still cannot be recovered from a single camera. Multiple cameras at different positions can therefore complement each other to reduce the occluded areas. In this work, four configurations are simulated with a synthesized scene.
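The way cameras at different positions complement each other can be sketched with a trivial fusion step: each camera's image, once depth-warped to the eye viewpoint, comes with a validity mask marking occlusion holes, and each output pixel is taken from the first camera that actually observed it. This is an illustrative sketch under assumed inputs, not the paper's interpolation method.

```python
import numpy as np

def fuse_warped_views(views, masks):
    """Fuse several depth-warped camera views into one eye view.

    views: per-camera images already warped to the target viewpoint;
    masks: boolean arrays that are False where warping left an
    occlusion hole. Pixels come from the first camera that saw them.
    """
    out = np.zeros_like(views[0])
    filled = np.zeros(views[0].shape, dtype=bool)
    for view, mask in zip(views, masks):
        take = mask & ~filled
        out[take] = view[take]
        filled |= take
    return out, filled

# Camera A misses pixel 2, camera B misses pixel 0; together the two
# cameras cover every pixel of the target view.
a = np.array([10.0, 11.0, 0.0]); ma = np.array([True, True, False])
b = np.array([0.0, 11.0, 12.0]); mb = np.array([False, True, True])
fused, covered = fuse_warped_views([a, b], [ma, mb])
```

With more cameras, the remaining holes (pixels no camera observed) shrink, which is the motivation for comparing the four camera configurations.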
Chun-Jui Lai, Ping-Hsuan Han, and Y. Hung. "View interpolation for video see-through head-mounted display." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945135
Seungbae Bang, Meekyoung Kim, Deok-Kyeong Jang, Sung-Hee Lee
A knee brace is a sports product or piece of medical equipment that increases the stability of the knee during motion. The proper design of a subject-specific knee brace should take the subject's anatomical characteristics into account, since they influence the knee dynamics. However, anatomical information is hidden under the skin, and obtaining it normally requires expensive equipment such as a Magnetic Resonance Imaging (MRI) device or a Computed Tomography (CT) scanner.
Seungbae Bang, Meekyoung Kim, Deok-Kyeong Jang, and Sung-Hee Lee. "Estimating skeleton from skin data for designing subject-specific knee braces." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945146