
ACM SIGGRAPH 2016 Posters: Latest Publications

Mobile virtual interior stylization from scale estimation
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945092
Shintaro Murakami, T. Mukasa, Tony Tung
We present a new feature for AR/VR applications on consumer mobile devices equipped with a video camera (e.g., smartphones). Direct or indirect scale estimation of the scene or objects is necessary for realistic rendering of virtual objects in a real-world environment. Standard approaches usually rely on 3D vision with sensor fusion (e.g., visual SLAM) or pattern recognition (e.g., using AR markers or reference object learning), and suffer from various limitations. Here, we argue that by combining inertial measurements and visual cues, the problem reduces to estimating a single 1D parameter: the distance from the device to the floor. In particular, we discuss robust solutions to the absolute scale estimation problem for indoor environments.
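The abstract does not spell out the estimation itself. As a rough, hypothetical sketch of how the problem collapses to a single parameter (all function names and values below are assumptions, not taken from the poster), one can recover an absolute scale factor by comparing the camera's unscaled height above a SLAM-reconstructed floor plane with a metric device-to-floor distance:

```python
import numpy as np

def camera_height_above_floor(cam_pos, floor_points, gravity_dir):
    """Unscaled camera height: distance from the camera to the floor plane,
    measured along the gravity direction reported by the IMU."""
    g = gravity_dir / np.linalg.norm(gravity_dir)   # unit "down" vector in camera/map frame
    floor_level = np.mean(floor_points @ g)         # average floor offset along gravity
    return floor_level - cam_pos @ g                # camera-to-floor distance in map units

def absolute_scale(metric_height, cam_pos, floor_points, gravity_dir):
    """Scale factor converting SLAM map units into metres, given the
    device-to-floor distance in metres (the single 1D parameter)."""
    h_map = camera_height_above_floor(cam_pos, floor_points, gravity_dir)
    return metric_height / h_map

# Toy example with a map reconstructed at half the true scale.
cam_pos = np.array([0.0, 0.0, 0.0])
floor_points = np.array([[1.0, 0.75, 2.0], [-0.5, 0.75, 1.0], [0.2, 0.75, 3.0]])  # unscaled floor
gravity_dir = np.array([0.0, 1.0, 0.0])   # IMU says +y is down
print(absolute_scale(metric_height=1.5, cam_pos=cam_pos,
                     floor_points=floor_points, gravity_dir=gravity_dir))  # -> 2.0
```

In this toy example the map was reconstructed at half the true scale, so the recovered factor is 2; in practice the device-to-floor distance itself is the quantity that has to be estimated robustly.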
Citations: 3
Graphical manipulation of human's walking direction with visual illusion
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945087
Akira Ishii, Ippei Suzuki, Shinji Sakamoto, Keita Kanai, Kazuki Takazawa, Hiraku Doi, Yoichi Ochiai
Conventional research on pedestrian navigation systems has explored the possibilities of presenting information to users both visually and aurally. Existing navigation systems require users to recognize information, and then to follow directions as separate, conscious processes, which inevitably require attention to the system. This study proposes a novel method that enables pedestrians to be guided without conscious interaction with a navigational system.
Citations: 0
Fast, accurate and sparse, automatic facade reconstruction from unstructured ground laser-scans
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945123
Kwamina Edum-Fotwe, P. Shepherd, Matthew Brown, Dan Harper, Richard Dinnis
This simple paper describes an intuitive data-driven approach to reconstructing architectural facade models from unstructured point-clouds. The algorithm presented yields sparse semantically-rich models that are better suited to interactive simulation than the equivalent dense-reconstructions, yet executes significantly faster than the prevalent sparse-operators. The key advantages include accuracy, efficiency and the ability to model irregular windows.
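The abstract does not describe the operators it uses; purely as a generic illustration of one common building block for facade reconstruction from an unstructured scan (not the authors' algorithm), a RANSAC-style dominant-plane fit could look like this:

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.02, seed=0):
    """Fit a dominant plane (n, d) with n.p + d = 0 to a point cloud by RANSAC."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_count = None, None, -1
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                  # degenerate (collinear) sample, skip it
            continue
        n = n / norm
        d = -n @ p0
        count = np.sum(np.abs(points @ n + d) < inlier_tol)
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d, best_count

# Toy cloud: a noisy wall at x = 1 plus some clutter points.
rng = np.random.default_rng(1)
wall = np.column_stack([np.full(500, 1.0) + rng.normal(0, 0.005, 500),
                        rng.uniform(0, 10, 500), rng.uniform(0, 3, 500)])
clutter = rng.uniform(0, 10, (100, 3))
n, d, inliers = ransac_plane(np.vstack([wall, clutter]))
print(n, d, inliers)   # normal should come out close to (+/-1, 0, 0)
```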
Citations: 2
Nonuniform depth distribution selection with discrete Fourier transform
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945133
Lode Jorissen, Patrik Goorts, G. Lafruit, P. Bekaert
In recent years there has been growing interest in the generation of virtual views from a limited set of input cameras. This is especially useful for applications such as Free Viewpoint Navigation and light field displays [Tanimoto 2015]. The latter often require tens to hundreds of input views, while it is often not feasible to record with that many cameras. View interpolation algorithms often traverse a set of depths to find correspondences between the input images [Stankiewicz et al. 2013; Goorts et al. 2013]. Most algorithms choose a uniform set of depths to traverse (as shown in Figure 2(a)), but this often leads to an excessive amount of unnecessary calculations in regions where no objects are located. It also results in an increased number of mismatches, and thus inaccuracies in the generated views. These problems also occur when too large a depth range is selected. Hence, a depth range that tightly encloses the scene is typically selected manually to mitigate these errors. A depth distribution that organizes the depth layers around the objects in the scene, as shown in Figure 2(b), would reduce these errors and decrease the number of computations by reducing the number of depths to search through. [Goorts et al. 2013] determine a nonuniform global depth distribution by reusing the depth information generated at the previous time stamp, which makes the algorithm dependent on previous results.
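The DFT-based selection procedure itself is not reproduced in this abstract; as a simplified, hypothetical sketch of the underlying idea, the snippet below places depth layers nonuniformly by inverse-CDF sampling of a depth occupancy histogram, so that layers cluster where scene content actually lies:

```python
import numpy as np

def nonuniform_depth_layers(depth_samples, n_layers, d_min, d_max, n_bins=64):
    """Place depth layers so their density follows the depth occupancy of the scene.

    depth_samples: rough per-pixel depth estimates (e.g. from a coarse pass).
    Returns n_layers depth values clustered around occupied depths.
    """
    hist, edges = np.histogram(depth_samples, bins=n_bins, range=(d_min, d_max))
    pdf = hist + 1e-6                        # avoid empty bins breaking the CDF
    cdf = np.cumsum(pdf) / np.sum(pdf)
    # Invert the CDF at uniformly spaced quantiles -> more layers in dense regions.
    quantiles = (np.arange(n_layers) + 0.5) / n_layers
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.interp(quantiles, cdf, centers)

# Toy scene: most surfaces near 2 m and 5 m, nothing in between.
rng = np.random.default_rng(0)
depths = np.concatenate([rng.normal(2.0, 0.1, 1000), rng.normal(5.0, 0.2, 1000)])
layers = nonuniform_depth_layers(depths, n_layers=16, d_min=0.5, d_max=8.0)
print(np.round(layers, 2))   # layers concentrate around 2 m and 5 m
```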
Citations: 2
Adding visual details based on low-resolution energy cascade ratios for smoke simulation
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945094
Masato Ishimuroya, T. Kanai
We propose a method for adding visual details to fluid animation while reducing noisy appearances. In grid-based fluid simulations, an issue is that while highly detailed fluids with small eddies can be obtained by increasing the number of grid cells, doing so costs much more computation time. To address this, various methods for adding details (or up-scaling resolution) have been proposed. Those methods can generate fine animations quickly by adding high-frequency noise or external forces to coarse simulation results. However, they typically generate tiny eddies over the whole fluid surface, and the result appears too noisy. In this paper, we consider the distribution of kinetic energy in the spatial frequency domain and apply it to two existing methods for adding details. By using our method, noise or external forces can be added at the appropriate positions in the fluid, and consequently natural-looking details can be achieved.
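The energy cascade ratios themselves are not given in this abstract; as a generic sketch of the ingredient the method builds on, namely how kinetic energy is distributed over spatial frequencies of a velocity field, one could radially bin the Fourier power by wavenumber (the function below is illustrative, not the authors' code):

```python
import numpy as np

def kinetic_energy_spectrum(u, v):
    """Radially binned kinetic-energy spectrum E(k) of a 2D velocity field (u, v)."""
    n = u.shape[0]
    # Energy density per Fourier mode: 0.5 * (|u_hat|^2 + |v_hat|^2).
    e_hat = 0.5 * (np.abs(np.fft.fft2(u))**2 + np.abs(np.fft.fft2(v))**2) / n**4
    kx = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers
    ky = np.fft.fftfreq(n, d=1.0 / n)
    k_mag = np.sqrt(kx[:, None]**2 + ky[None, :]**2)
    k_bins = np.arange(0.5, n // 2, 1.0)              # radial shells around k = 1, 2, ...
    which = np.digitize(k_mag.ravel(), k_bins)
    spectrum = np.bincount(which, weights=e_hat.ravel(), minlength=len(k_bins) + 1)
    return spectrum[1:len(k_bins)]                    # E(k) for k = 1 .. n/2 - 1

# Toy field: one large-scale mode plus small-scale noise.
n = 64
x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
u = np.sin(2 * np.pi * 3 * y / n) + 0.05 * np.random.default_rng(0).normal(size=(n, n))
v = -np.sin(2 * np.pi * 3 * x / n) + 0.05 * np.random.default_rng(1).normal(size=(n, n))
print(kinetic_energy_spectrum(u, v)[:8])   # energy concentrated around k = 3
```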
Citations: 0
Improving global exploration of MCMC light transport simulation
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945128
M. Sik, Jaroslav Křivánek
Markov Chain Monte Carlo (MCMC) has recently received a lot of attention in light transport simulation research [Hanika et al. 2015; Hachisuka et al. 2014]. While these methods aim at high-quality sampling of local extremes of the path space (so-called local exploration), the other issue, discovering these extremes in the first place, has so far been neglected. Poor global exploration results in oversampling some parts of the path space while undersampling or completely missing other parts (see Fig. 1). Such behavior of MCMC-based light transport algorithms limits their use in practice, since we can never tell for sure whether the image has already converged.
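For readers unfamiliar with the local/global distinction, the toy Metropolis-Hastings sampler below (a generic illustration, not the authors' algorithm) mixes small local mutations with occasional large independent ones; setting the large-mutation probability to zero makes the chain stick to one mode of a bimodal target, which is exactly the global-exploration failure the abstract describes:

```python
import numpy as np

def target(x):
    """Bimodal, unnormalized density: two well-separated peaks ("local extremes")."""
    return np.exp(-0.5 * ((x - 2.0) / 0.3)**2) + np.exp(-0.5 * ((x + 2.0) / 0.3)**2)

def metropolis(n_steps, p_large=0.2, seed=0):
    """Metropolis-Hastings mixing local and large (global) mutations."""
    rng = np.random.default_rng(seed)
    x, samples = 2.0, []
    for _ in range(n_steps):
        if rng.random() < p_large:
            y = rng.uniform(-5.0, 5.0)        # large mutation: independent jump anywhere
        else:
            y = x + rng.normal(0.0, 0.1)      # small mutation: local exploration
        # The mixture proposal is symmetric here, so the acceptance ratio reduces
        # to the ratio of target values.
        if rng.random() < min(1.0, target(y) / target(x)):
            x = y
        samples.append(x)
    return np.array(samples)

with_large = metropolis(20000, p_large=0.2)
local_only = metropolis(20000, p_large=0.0)
print("fraction of samples near x = -2:",
      np.mean(with_large < 0), "vs", np.mean(local_only < 0))  # roughly 0.5 vs 0.0
```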
Citations: 6
Estimating lighting environments based on shadow area in an omni-directional image
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945166
Masashi Baba, Kesuke Haruta, S. Hiura
To create realistic CG images, information about the lighting is very important. There are two ways to estimate light source information. One is a direct measurement method using images captured with a fish-eye lens or a spherical mirror [Debevec 1998], and the other is an indirect measurement method that estimates the positions and intensities of the light sources from the shadow information of objects [Sato et al. 2003]. In the direct measurement method, by treating pixels of the captured image as light sources with corresponding intensities, it is possible to estimate the lighting environment densely. However, for a high-intensity light source like the sun, the dynamic range of the camera is insufficient, and the radiant intensity of the light source cannot be accurately estimated. We therefore propose a method that combines a direct measurement technique with an indirect measurement method: the light source information in the high-intensity area of the captured image is estimated by the indirect method. In experiments using real images, the proposed method could measure the lighting environment even for outdoor scenes containing a high-intensity light source such as the sun, and it was confirmed that images with realistic shadows equivalent to real images could be created.
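As a minimal sketch of the direct-measurement part of such a pipeline, treating each pixel of a latitude-longitude environment image as a directional light weighted by its solid angle, one might write something like the following (layout conventions and names are assumptions, not taken from the paper):

```python
import numpy as np

def envmap_to_lights(env, top_k=64):
    """Convert a latitude-longitude radiance map (H x W x 3) into directional lights.

    Each pixel becomes a light whose direction comes from its (theta, phi) and whose
    power is radiance * solid angle. Returns the top_k brightest (direction, power) pairs.
    """
    h, w, _ = env.shape
    theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle, 0 at the zenith
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi      # azimuth
    d_omega = np.sin(theta)[:, None] * (np.pi / h) * (2.0 * np.pi / w)   # per-row solid angle
    power = env * d_omega[..., None]                  # H x W x 3 radiant power per pixel
    # Unit direction on the sphere for every pixel (z points to the zenith).
    sin_t, cos_t = np.sin(theta)[:, None], np.cos(theta)[:, None]
    dirs = np.stack([sin_t * np.cos(phi)[None, :],
                     sin_t * np.sin(phi)[None, :],
                     np.broadcast_to(cos_t, (h, w))], axis=-1)
    flat_power = power.reshape(-1, 3)
    order = np.argsort(flat_power.sum(axis=1))[::-1][:top_k]
    return dirs.reshape(-1, 3)[order], flat_power[order]

# Toy map: dim sky plus one very bright "sun" pixel near the zenith.
env = np.full((32, 64, 3), 0.1)
env[2, 10] = [500.0, 480.0, 450.0]
dirs, powers = envmap_to_lights(env, top_k=4)
print(dirs[0], powers[0])   # brightest light points near the zenith
```

The point of the paper is precisely that such a direct conversion saturates for the sun, which is why the bright region's intensity is handed off to the shadow-based (indirect) estimate.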
Citations: 1
An interactive system for original necklace design
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945082
Y. Igarashi, Tsubasa Hiyama, K. Arakawa
We propose an interactive system to assist novices in the design and construction of original necklaces. The system consists of two design tools, an interactive drag-and-drop design tool using images of pearls and a design selection tool with an interactive evolutionary computation (IEC) system. The system includes a virtual modeling simulation which allows users to superimpose a necklace design over their own photograph, taken with a web camera. The system also provides a customized construction guide to assist the user with the construction process. We conduct a field trial to demonstrate that non-professional users can design original necklaces using our system.
Citations: 0
View interpolation for video see-through head-mounted display
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945135
Chun-Jui Lai, Ping-Hsuan Han, Y. Hung
By using a head-mounted display (HMD), we can have an immersive virtual reality experience, but the user cannot see any information from the real world. To solve this problem, a video see-through HMD can acquire images from the real environment and present them inside the HMD, which lets us build a mixed reality (MR) or augmented reality (AR) system. However, how to attach and calibrate cameras on an HMD to recover the real environment is still a research issue. The HTC VIVE has a single camera in front of the device. [Steptoe et al. 2014] and OVRVISION Pro proposed attaching dual cameras to capture left and right images. Due to the difference in viewpoint, the images captured by the cameras differ from what the human eyes would see (Figure 2). Although we could recover true 3D information with a depth map, there are still some occlusion areas that we cannot recover with a single camera. Therefore, multiple cameras at different positions could complement each other to reduce occlusion areas. In this work, four configurations are simulated with a synthesized scene.
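As a rough illustration of why a depth map helps but still leaves occlusion holes (a hypothetical forward-warping sketch, not one of the configurations evaluated in the poster), one can reproject each camera pixel into the eye's viewpoint and observe that surfaces hidden from the camera simply never arrive:

```python
import numpy as np

def forward_warp(depth, color, K, R, t):
    """Forward-warp a color image from the camera view into a nearby eye view.

    depth: H x W metric depth for the camera view; K: shared 3x3 intrinsics;
    (R, t): rigid transform from the camera frame to the eye frame.
    Returns the warped image; pixels nothing maps to stay zero (occlusion holes).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N homogeneous pixels
    pts_cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)             # back-project to 3D
    pts_eye = R @ pts_cam + t.reshape(3, 1)                             # move into the eye frame
    proj = K @ pts_eye
    ue = np.round(proj[0] / proj[2]).astype(int)
    ve = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros_like(color)
    ok = (ue >= 0) & (ue < w) & (ve >= 0) & (ve < h) & (proj[2] > 0)
    out[ve[ok], ue[ok]] = color.reshape(-1, 3)[ok]   # nearest-pixel splat (no z-buffer, sketch only)
    return out

# Toy setup: 6.5 cm offset between the front camera and the eye, flat scene at 1 m.
K = np.array([[300.0, 0, 160], [0, 300.0, 120], [0, 0, 1]])
depth = np.full((240, 320), 1.0)
color = np.random.default_rng(0).uniform(size=(240, 320, 3))
warped = forward_warp(depth, color, K, R=np.eye(3), t=np.array([0.065, 0.0, 0.0]))
print((warped.sum(axis=2) == 0).mean())   # fraction of hole pixels along one image border
```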
Citations: 4
Estimating skeleton from skin data for designing subject-specific knee braces
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945146
Seungbae Bang, Meekyoung Kim, Deok-Kyeong Jang, Sung-Hee Lee
A knee brace is a sports product or medical device that increases stability in the dynamics of the knee. The proper design of a subject-specific knee brace should take the subject's anatomical characteristics into account, since they influence the knee dynamics. However, anatomical information is hidden under the skin, and obtaining such information is restricted to expensive equipment such as a Magnetic Resonance Imaging (MRI) device or a Computed Tomography (CT) scanner.
Citations: 0