
Latest Publications in ACM SIGGRAPH 2016 Posters

Bionic scope: wearable system for visual extension triggered by bioelectrical signal
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945119
Shota Ekuni, Koichi Murata, Yasunari Asakura, Akira Uehara
Visual extension is an essential problem because visual information accounts for a large part of the sensory information that humans process. Several instruments are used to view distant objects or people, such as monocles, binoculars, and telescopes. When we use these instruments, we first take a general view without them and then adjust their magnification and focus. These operations are complicated and occupy the user's hands. A visual extension device that can be used easily without the hands would therefore be extremely useful. A system developed in previous work recognizes the movement of the user's eyelid and uses it to operate devices [Hideaki et al. 2013]. However, its camera is placed in front of the eye and obstructs the field of view; in addition, image recognition is computationally expensive and difficult to run on a small computer. When a person intends to move a muscle, a bioelectrical signal (BES) appears on the surface of the skin and can be measured with small, thin electrodes attached there. Using the BES, the user's operational intentions can be detected promptly without obstructing the field of view. Moreover, BES sensors consume little electrical power and help downsize the system.
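The poster does not spell out the detection pipeline, but the triggering idea lends itself to a short sketch: the Python fragment below estimates a sliding RMS envelope of a surface BES recording and reports rising edges that a wearable scope could use as zoom triggers. The sampling rate, window length, threshold, and all function names are illustrative assumptions, not values from the work.

```python
import numpy as np

# Illustrative parameters (not from the poster): sampling rate, envelope
# window, and the normalized activation threshold.
FS = 1000            # electrode samples per second
WINDOW = 0.1         # seconds over which the BES envelope is estimated
THRESHOLD = 0.3      # envelope level treated as an operational intent

def detect_activation(bes):
    """Boolean mask of samples where the sliding RMS envelope of the
    bioelectrical signal exceeds the threshold."""
    n = int(FS * WINDOW)
    padded = np.concatenate([np.zeros(n - 1), np.asarray(bes, dtype=float) ** 2])
    envelope = np.sqrt(np.convolve(padded, np.ones(n) / n, mode="valid"))
    return envelope > THRESHOLD

def zoom_triggers(active):
    """Indices of rising edges; each edge could toggle the scope's zoom."""
    active = np.asarray(active)
    return (np.flatnonzero(~active[:-1] & active[1:]) + 1).tolist()

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)
    bes = 0.05 * np.random.randn(t.size)                         # resting noise
    bes[800:1200] += 0.8 * np.sin(2 * np.pi * 80 * t[800:1200])  # muscle burst
    print(zoom_triggers(detect_activation(bes)))
```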
Citations: 2
Panorama image interpolation for real-time walkthrough
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945111
N. Kawai, Cédric Audras, Sou Tabata, Takahiro Matsubara
We propose a method to generate new views of a scene by capturing a few panorama images in real space and interpolating the captured images. We describe a procedure for interpolating panoramas captured at the four corners of a rectangular area without using geometry, and present experimental results including a real-time walkthrough. Our image-based method enables walking through a space far more easily than 3D modeling and rendering.
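The abstract leaves the interpolation itself unspecified; as a minimal illustration of position-weighted blending between four corner captures, the sketch below bilinearly cross-dissolves four pre-aligned equirectangular panoramas for a viewpoint inside the rectangle. The published method presumably performs correspondence-based warping rather than a plain cross-dissolve, and the names and array shapes here are assumptions.

```python
import numpy as np

def interpolate_panorama(corners: dict, pos) -> np.ndarray:
    """Bilinearly blend four corner panoramas for a viewpoint inside the
    rectangle. `corners` maps (0,0), (1,0), (0,1), (1,1) to HxWx3 arrays of
    pre-aligned equirectangular images; `pos` is the normalized (u, v)
    position of the virtual viewpoint."""
    u, v = pos
    weights = {
        (0, 0): (1 - u) * (1 - v),
        (1, 0): u * (1 - v),
        (0, 1): (1 - u) * v,
        (1, 1): u * v,
    }
    out = np.zeros_like(corners[(0, 0)], dtype=float)
    for key, w in weights.items():
        out += w * corners[key].astype(float)
    return out.astype(np.uint8)

# Toy example: a viewpoint one quarter of the way along both axes.
panos = {k: np.random.randint(0, 256, (512, 1024, 3), dtype=np.uint8)
         for k in [(0, 0), (1, 0), (0, 1), (1, 1)]}
view = interpolate_panorama(panos, (0.25, 0.25))
```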
Citations: 9
Layered telepresence: simultaneous multi presence experience using eye gaze based perceptual awareness blending
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945098
M. Y. Saraiji, Shota Sugimoto, C. Fernando, K. Minamizawa, S. Tachi
We propose "Layered Telepresence", a novel method of experiencing simultaneous multi-presence. Users eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges audio-visual information received through multiple robots into a priority-driven layered stack. A weighted feature map was created based on the objects recognized for each layer, using image-processing techniques, and pushes the most weighted layer around the users gaze in to the foreground. All other layers are pushed back to the background providing an artificial depth-of-field effect. The proposed method not only works with robots, but also each layer could represent any audio-visual content, such as video see-through HMD, television screen or even your PC screen enabling true multitasking.
Citations: 2
ThirdEye: a coaxial feature tracking system for stereoscopic video see-through augmented reality
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945100
Yu-Xiang Wang, Yu-Ju Tsai, Yu-Hsuan Huang, Wan-ling Yang, Tzu-Chieh Yu, Yu-Kai Chiu, M. Ouhyoung
For a stereoscopic augmented reality (AR) system, continuous feature tracking of the observed target is required to place a virtual object in real-world coordinates. In addition, the two cameras must be separated by a proper distance to obtain correct stereo images for video see-through applications. Both higher resolution and a higher frame rate (FPS) improve the user experience. However, feature tracking can become the bottleneck with high-resolution images, and latency increases if image processing is performed before tracking.
Citations: 1
Large-scale rapid-prototyping with zometool
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945155
Chun-Kai Huang, Tsung-Hung Wu, Yi-Ling Chen, Bing-Yu Chen
In recent years, personalized fabrication has attracted much attention due to the greatly improved accessibility of consumer-level 3D printers. However, 3D printers still suffer from relatively long production times and limited output size, which are undesirable for large-scale rapid prototyping. Zometool, a popular building-block system widely used for education and entertainment, can potentially provide an alternative solution in these scenarios. However, even for 3D models of moderate complexity, novice users may have difficulty building visually plausible results by themselves. The goal of this work is therefore an automatic system that assists users in realizing Zometool rapid prototyping of a specified 3D shape. Compared with previous work [Zimmer and Kobbelt 2014], our method can achieve ease of assembly and economical use of building units, since we focus on generating the Zometool structures through a higher level of shape abstraction.
Citations: 0
Interaction with virtual shadow through real shadow using two projectors
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945121
Hiroko Iwasaki, Momoko Kondo, Rei Ito, Saya Sugiura, Yuka Oba, S. Mizuno
In this paper, we propose a method to interact with virtual shadows through the real shadows of various physical objects using two projectors. The system scans the physical objects in front of one projector, generates virtual shadows with CG according to the scan data, and superimposes the virtual shadows onto the real shadows of the physical objects with that projector. The other projector is used to superimpose virtual light sources inside the real shadows. Our method enables novel interaction with various shadows, such as the shadows of flower arrangements.
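The abstract describes generating CG shadows from scan data; one small piece of the underlying geometry is the planar projection of scanned points along rays from a virtual light, sketched below. Calibration between the scanner, the two projectors, and the floor plane is assumed and omitted, and the function names are hypothetical.

```python
import numpy as np

def project_to_plane(points, light, plane_point, plane_normal):
    """Project scanned 3D points onto a plane along rays from a virtual
    light, yielding the footprint of a CG shadow that could then be drawn
    into the projector image. Standard planar-projection math; nothing
    here reproduces the poster's actual pipeline."""
    n = np.asarray(plane_normal, dtype=float)
    l = np.asarray(light, dtype=float)
    p0 = np.asarray(plane_point, dtype=float)
    out = []
    for p in np.asarray(points, dtype=float):
        d = p - l                                 # ray direction: light -> point
        denom = np.dot(n, d)
        if abs(denom) < 1e-9:
            continue                              # ray parallel to the plane
        t = np.dot(n, p0 - l) / denom
        if t > 0:
            out.append(l + t * d)                 # intersection with the plane
    return np.array(out)

# Toy example: a point 1 m above the floor, lit from a virtual lamp at 2 m.
shadow = project_to_plane([[0.2, 0.3, 1.0]], [0, 0, 2.0], [0, 0, 0], [0, 0, 1])
```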
Citations: 5
OpenEXR/Id isolate any object with a perfect antialiasing
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945136
Cyril Corvazier, B. Legros, Rachid Chikh
We present a new storage scheme for computer-graphics images based on OpenEXR 2. Using such EXR/Id files, a compositing artist can isolate a selection of objects (by picking them or by matching their names with a regular expression) and color-correct them without edge artifacts, which previously was not possible without rendering the selection on its own layer. Using this file format avoids back-and-forth between the rendering and compositing departments, because mask images and layering are no longer needed. The technique is demonstrated in an open-source software suite that includes a library to read and write EXR/Id files and an OpenFX plug-in that generates the images in any compositing software.
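To make the selection idea concrete, here is a small sketch of matching object names with a regular expression and turning the matching ids into a per-pixel mask. It deliberately does not reproduce the EXR/Id file layout or its library API; in the real scheme, ids are stored with per-sample coverage so the resulting mattes stay antialiased, whereas this toy mask is binary. All names in the snippet are hypothetical.

```python
import re
import numpy as np

def select_mask(id_image: np.ndarray, id_to_name: dict, pattern: str) -> np.ndarray:
    """Build a per-pixel selection mask from an object-id image and a name
    table. `id_image` holds one object id per pixel, `id_to_name` maps ids to
    the scene-graph names stored alongside them, and `pattern` is the
    artist's regular expression."""
    rx = re.compile(pattern)
    selected = [i for i, name in id_to_name.items() if rx.search(name)]
    return np.isin(id_image, selected)

# Toy example: isolate every object whose name contains "tree".
ids = np.array([[1, 1, 2],
                [3, 2, 2]])
names = {1: "env/tree_01", 2: "hero/car", 3: "env/tree_02"}
mask = select_mask(ids, names, r"tree")   # True where a tree covers the pixel
```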
Citations: 1
Living the past: the use of VR to provide a historical experience
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945169
Pedro Rossa, Nicolas Hoffman, João Ricardo Bittencourt, Fernando P. Marson, V. Cassol
In this work we explore the use of games and VR to support History teaching in Brazil. We developed a game and a VR experience based on local technology. In our approach, the player takes the role of an Indian who lived in the Jesuit Reductions in the south of Brazil and is asked to practice bow-and-arrow shooting.
Citations: 0
Charcoal rendering and shading with reflections
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945110
Yuxiao Du, E. Akleman
In this work, we have developed an approach to include global illumination effects in charcoal drawing (see Figure 1). Our charcoal shader provides a robust computation for obtaining a charcoal effect for a wide variety of diffuse and specular materials. Our contributions can be summarized as follows: (1) a barycentric shader based on degree-zero B-spline basis functions; (2) a set of hand-drawn charcoal control texture images that naturally provide the desired charcoal look and feel; and (3) a painter's hierarchy for handling the large number of shading parameters consistent with charcoal drawing.
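A degree-zero B-spline basis is piecewise constant, so the shader effectively selects exactly one control texture per shading interval. The sketch below illustrates that selection for a shading value in [0, 1]; the poster's full barycentric shader, reflections, and painter's hierarchy are not modeled here, and every name is illustrative.

```python
import numpy as np

def degree_zero_weights(t: float, n: int) -> np.ndarray:
    """Degree-zero B-spline basis over n control textures: exactly one
    weight is 1 for the interval of [0, 1] containing t, the rest are 0."""
    i = min(int(t * n), n - 1)
    w = np.zeros(n)
    w[i] = 1.0
    return w

def charcoal_shade(shading: np.ndarray, control_textures: list) -> np.ndarray:
    """Pick, per pixel, the hand-drawn charcoal texture whose bin matches the
    computed shading value (e.g. a diffuse or global-illumination term in
    [0, 1]). The textures are assumed to match the image resolution."""
    n = len(control_textures)
    bins = np.clip((shading * n).astype(int), 0, n - 1)
    h, w = shading.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for i, tex in enumerate(control_textures):
        out[bins == i] = tex[bins == i]
    return out
```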
Citations: 6
Pseudo-softness evaluation in grasping a virtual object with a bare hand
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945118
Mie Sato, Sota Suzuki, Daiki Ebihara, Sho Kato, Sato Ishigaki
Bare-hand interaction with a virtual object avoids the discomfort of devices mounted on the user's hand. There are some studies on bare-hand interaction [Benko et al. 2012]; however, in them the virtual object is a hard object, or the user touches a physical object during the interaction. We focus on grasping a virtual object without using any physical object. Grasping is one of the basic movements for manipulating an object and is more difficult than simple movements such as touching. Because the bare-hand interaction involves no physical object, there is no haptic device on the user's hand and therefore no physical feedback to the user. Our challenge is to provide the user with pseudo-softness while grasping a virtual object with a bare hand. We have been developing an AR system that makes it possible for a user to grasp a virtual object with a bare hand [Suzuki et al. 2014]. Using this AR system, we propose visual stimuli that correspond with the user's hand movements to convey the pseudo-softness of a virtual object. Evaluation results show that with these visual stimuli a user feels pseudo-softness while grasping a virtual object with a bare hand.
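The poster does not specify the stimuli, but pseudo-haptic softness is commonly conveyed by visually deforming the object as the fingers close past its surface. The sketch below shows one such assumed mapping from finger gap to displayed object size; it is an illustration of the general idea, not the evaluated stimuli.

```python
def visual_squash(finger_gap: float, rest_size: float, softness: float) -> float:
    """Map the thumb-to-index distance to a displayed object size: once the
    fingers close past the object's rest size, the rendered object is
    squashed in proportion to `softness` in [0, 1]. A soft object yields
    visibly while a hard one barely changes, suggesting softness without
    any force feedback. This mapping is an assumption, not the poster's."""
    penetration = max(0.0, rest_size - finger_gap)
    return max(rest_size - softness * penetration, 0.1 * rest_size)

# Example: fingers 2 cm apart around a 5 cm object.
soft = visual_squash(finger_gap=0.02, rest_size=0.05, softness=0.8)  # ~2.6 cm shown
hard = visual_squash(finger_gap=0.02, rest_size=0.05, softness=0.1)  # ~4.7 cm shown
```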
Citations: 11