Latest publications: ACM SIGGRAPH 2016 Posters

Real-time 3D rendering using depth-based geometry reconstruction and view-dependent texture mapping
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945162
Chih-Fan Chen, M. Bolas, Evan A. Suma
With the recent proliferation of high-fidelity head-mounted displays (HMDs), there is increasing demand for realistic 3D content that can be integrated into virtual reality environments. However, creating photorealistic models is not only difficult but also time consuming. A simpler alternative involves scanning objects in the real world and rendering their digitized counterparts in the virtual world. Capturing objects can be achieved by performing a 3D scan using widely available consumer-grade RGB-D cameras. This process involves reconstructing the geometric model from depth images generated using a structured light or time-of-flight sensor. The colormap is determined by fusing data from multiple color images captured during the scan. Existing methods compute the color of each vertex by averaging the colors from all these images. Blending colors in this manner creates low-fidelity models that appear blurry (Figure 1, right). Furthermore, this approach also yields textures with fixed lighting that is baked onto the model. This limitation becomes more apparent when viewed in head-tracked virtual reality, as the illumination (e.g. specular reflections) does not change appropriately based on the user's viewpoint.
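The vertex-averaging approach the authors criticize can be sketched in a few lines (a hypothetical minimal example; the array layout is illustrative, not from the paper). Averaging a specular highlight seen in one view with the diffuse color seen in another washes out view-dependent detail, which is the blurriness described above.

```python
import numpy as np

def average_vertex_colors(colors_per_view):
    """Per-vertex color averaging across N captured color images.

    colors_per_view: (n_views, n_vertices, 3) RGB samples of each vertex
    projected into each color image.
    """
    return np.mean(colors_per_view, axis=0)

# Two views disagree on a vertex: one sees a bright highlight, the other
# the diffuse base color. Averaging blends the highlight away.
views = np.array([
    [[1.0, 1.0, 1.0]],   # view 1: specular highlight on the vertex
    [[0.2, 0.1, 0.1]],   # view 2: diffuse base color
])
print(average_vertex_colors(views))
```

A view-dependent texture mapping approach would instead weight or select source images based on the current viewpoint rather than blending them all.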
Citations: 4
PlenoGap: panorama light field viewing for HMD with focusing on gazing point
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945079
Fuko Takano, T. Koike
We propose a walk-through imaging method for head-mounted displays (HMDs) named 'PlenoGap'. The method always displays a refocused image on the HMD. The refocused image is generated from a trimmed panorama light field image, which is 360° cylindrical and always focused at the center of the HMD's view. In addition, we realize a walkthrough experience by generating intermediate images between three panorama light field images. Users can roam around a small area using a controller.
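The abstract does not publish PlenoGap's refocusing pipeline, but refocusing from a light field is conventionally done by shift-and-sum over sub-aperture views. A minimal sketch, with hypothetical array shapes, of how a refocused image could be recomputed as the focal depth changes:

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-sum refocus. lf: (U, V, H, W) sub-aperture views;
    `slope` selects the focal depth (0 keeps the captured focus plane)."""
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its angular offset from
            # the central view, then accumulate.
            dy = int(round(slope * (u - cu)))
            dx = int(round(slope * (v - cv)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Re-running `refocus` with a slope derived from the depth at the gazing point would keep the displayed image focused where the user looks.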
Citations: 2
Haptic wheelchair
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945168
Mike Lambeta, Matt Dridger, Paul J. White, J. Janssen, A. Byagowi
Virtual reality aims to provide an immersive experience to a user with the help of a virtual environment. This immersive experience requires two key components: one for capturing inputs from the real world, and the other for synthesizing real-world outputs based on interactions with the virtual environment. However, a user in a real-world environment experiences a greater set of feedback from real-world inputs, relating directly to auditory, visual, and force feedback. As such, in a virtual environment, a dissociation is introduced between the user's inputs and the feedback from the virtual environment. This dissociation contributes to the discomfort the user experiences relative to real-world interaction. Our team has introduced a novel way of receiving synthesized feedback from the virtual environment through the use of a haptic wheelchair.
Citations: 1
Automatic dance generation system considering sign language information
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945101
Wakana Asahina, Naoya Iwamoto, Hubert P. H. Shum, S. Morishima
In recent years, thanks to the development of 3DCG animation editing tools (e.g. MikuMikuDance), many 3D character dance animation movies are created by amateur users. However, it is very difficult to create choreography from scratch without any technical knowledge. Shiratori et al. [2006] produced an automatic dance generation system considering the rhythm and intensity of dance motions. However, each segment is selected randomly from a database, so the generated dance motion has no linguistic or emotional meaning. Takano et al. [2010] produced a human motion generation system considering motion labels. However, they use simple motion labels like "running" or "jump", so they cannot generate motions that express emotions. In reality, professional dancers create choreography based on musical features or lyrics, and express emotion or how they feel about the music. In our work, we aim to generate more emotional dance motion easily. Therefore, we use the linguistic information in lyrics to generate dance motion.
Citations: 5
Motion compensated automatic image compositing for GoPro videos
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945090
Ryan Lustig, Balu Adsumilli, David Newman
Image composition for GoPro videos captured in the presence of significant camera motion is a manual and time-consuming process. Existing techniques typically fail to automate this process due to the wide field of view and high camera motion of such videos. The proposed method solves these problems with an image registration algorithm for fisheye images that avoids expensive pixel warping or loss of field of view. Background subtraction is performed to extract moving foreground objects, which are noise-corrected and then layered onto a reference image to build the final composite. The results show marked improvements in accuracy and efficiency for automating image composition.
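The background-subtraction-and-layering step can be illustrated with a minimal sketch (the threshold, frames, and data layout are hypothetical, and real footage would first need the fisheye registration the abstract describes):

```python
import numpy as np

def composite(reference, frames, thresh=0.1):
    """Layer moving foreground pixels from each frame onto a reference.

    reference: (H, W, 3) registered background image in [0, 1].
    frames:    iterable of (H, W, 3) registered frames.
    Pixels differing from the reference by more than `thresh` in any
    channel are treated as foreground and copied onto the composite.
    """
    out = reference.copy()
    for frame in frames:
        mask = np.abs(frame - reference).max(axis=-1) > thresh
        out[mask] = frame[mask]
    return out
```

A production pipeline would additionally denoise the foreground masks (e.g. morphological filtering) before layering, as the abstract notes.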
Citations: 0
Optimized mobile rendering techniques based on local cubemaps
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945113
Roberto Lopez Mendez, Sylwester Bala
Local cubemaps (LC) were first introduced more than ten years ago for rendering reflections [Bjorke 2004]. Nevertheless, it is only in recent years that major game engines have incorporated this technique. In this paper we introduce a generalized concept of LC and present two new LC applications for rendering shadows and refractions. We show that limitations associated with the static nature of LC can be overcome by combining this technique with other well-known runtime techniques for reflections and shadows. Rendering techniques based on LC allow high-quality shadows, reflections and refractions to be rendered very efficiently, which makes them ideally suited to mobile devices, where runtime resources must be carefully balanced [Ice Cave Demo 2015].
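The core of a local-cubemap (parallax-corrected) fetch is re-aiming the lookup vector via the scene's proxy geometry. A minimal sketch, assuming an axis-aligned bounding-box proxy, a shaded point inside the box, and a ray with no zero components (names and data are illustrative, not from the paper):

```python
import numpy as np

def local_corrected_dir(pos, ray, box_min, box_max, cube_pos):
    """Correct a cubemap lookup direction for a local (bounded) scene.

    pos:      shaded point inside the proxy box.
    ray:      reflection/refraction direction (no zero components here).
    cube_pos: position from which the cubemap was captured.
    Returns the unit vector to fetch from the cubemap.
    """
    inv = 1.0 / ray
    t1 = (box_min - pos) * inv
    t2 = (box_max - pos) * inv
    t = np.min(np.maximum(t1, t2))  # exit distance from inside the box
    hit = pos + t * ray             # point on the proxy the ray hits
    d = hit - cube_pos              # re-aim from the capture position
    return d / np.linalg.norm(d)
```

A naive fetch would use `ray` directly; the corrected direction makes reflections register against nearby geometry, which is why the technique works well in bounded scenes like the Ice Cave demo.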
Citations: 0
Parallel 3D printing based on skeletal remeshing
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945126
Kuo-Wei Chen, Chih-Yuan Yao, You-En Lin, Yu-Chi Lai
Although 3D printing is becoming more popular, there are two major problems. The first is the slowness of the process, owing to the extra axis of information that must be processed compared with traditional 2D printers. The second is the printable dimension of 3D printers. Generally, the larger the model to be printed, the larger and more expensive the 3D printer has to be. Furthermore, large prints also require a large amount of extra infill material. With the advent of cheap 3D printers, such as OLO 3D printers [Inc. 2016], parallel printing with multiple cheap printers can be a solution. In order to print a 3D model in parallel, we must decompose it into smaller components. After printing out all the components, we assemble them by attaching them to the skeleton through supports and joints to form the final result. As our results show, our shell-and-bone-based model printing not only saves printing time but also uses less material than printing the whole model.
Citations: 0
Coded skeleton: programmable bodies for shape changing user interfaces
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945096
Miyu Iwafune, T. Ohshima, Yoichi Ochiai
We propose a novel design method to fabricate user interfaces with a mechanical metamaterial called the Coded Skeleton. The Coded Skeleton is a combination of shape memory alloys (SMAs) and 3D-printed bodies, and it has a computationally designed structure that is flexible in one deformation mode but stiff in the others. This property helps to realize materials that deform automatically under a small, lightweight actuator such as an SMA. It also enables sensing user inputs through the resistance value of the SMA. In this paper, we propose shape-changing user interfaces that integrate sensors and actuators as a Coded Skeleton. The deformation and stiffness of this structure are computationally designed and controllable. Further, we propose interactions and applications with user interfaces fabricated using our design method.
Citations: 4
A tabletop stereoscopic 3DCG system with motion parallax for two users
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945152
S. Mizuno
In this paper, I improve a tabletop stereoscopic 3DCG system with motion parallax so that two users can share a stereoscopic 3DCG scene together. I develop a method to calculate two users' viewpoints simultaneously by using depth images. I use a 3D-enabled projector to superimpose two 3DCG images, one for each user, and use active shutter glasses to separate them into individual images for each user. The improved system would be usable for cooperative work and match-type games.
Citations: 0
Light field completion using focal stack propagation
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945132
Terence Broad, M. Grierson
Both light field photography and focal stack photography are rapidly becoming more accessible with Lytro's commercial light field cameras and the ever-increasing processing power of mobile devices. Light field photography offers post-capture perspective changes and digital refocusing, but little is available in the way of post-production editing of light field images. We present a first approach for interactive content-aware completion of light fields and focal stacks, allowing for the removal of foreground or background elements from a scene.
Citations: 13