Visual extension has been an essential issue because visual information accounts for a large part of the sensory information that humans process. Several instruments are used to observe distant objects or people, such as monocles, binoculars, and telescopes. When we use these instruments, we first take a general view without them and then adjust their magnification and focus. These operations are complicated and occupy the user's hands. Therefore, a visual extension device that can be used easily without the hands would be extremely useful. A system developed in previous work recognizes the movement of the user's eyelid and operates devices accordingly [Hideaki et al. 2013]. However, its camera is placed in front of the eye, which obstructs the field of view. In addition, image recognition carries a high computational cost and is difficult to run on a small computer. When a human intends to move a muscle, a bioelectrical signal (BES) leaks out onto the surface of the skin. The BES can be measured by small, thin electrodes attached to the skin surface. By using the BES, the user's operational intentions can be detected promptly without obstructing the field of view. Moreover, using BES sensors reduces electrical power consumption and contributes to downsizing the system.
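As an illustration of how an operational intention might be read from a surface BES, the sketch below thresholds the RMS envelope of the signal over short windows. The sampling rate, window length, and threshold are illustrative assumptions, not the parameters of the actual device.

```python
import numpy as np

def detect_intent(bes, fs=1000, win_ms=100, threshold=0.05):
    """Return True for windows whose RMS envelope exceeds a calibrated
    threshold -- a minimal stand-in for detecting the user's intent to
    tense a muscle from a surface bioelectrical signal (BES)."""
    win = int(fs * win_ms / 1000)
    n = len(bes) // win
    # RMS envelope over non-overlapping windows
    rms = np.sqrt(np.mean(bes[: n * win].reshape(n, win) ** 2, axis=1))
    return rms > threshold

# Synthetic signal: quiet baseline, then a burst of muscle activity.
rng = np.random.default_rng(0)
quiet = 0.01 * rng.standard_normal(500)
burst = 0.2 * rng.standard_normal(500)
events = detect_intent(np.concatenate([quiet, burst]))
```

In a real system the windows where `events` is true would trigger the zoom, debounced so a single muscle tension yields a single command.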
Bionic scope: wearable system for visual extension triggered by bioelectrical signal. Shota Ekuni, Koichi Murata, Yasunari Asakura, Akira Uehara. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945119
N. Kawai, Cédric Audras, Sou Tabata, Takahiro Matsubara
We propose a method to generate new views of a scene by capturing a few panorama images in real space and interpolating the captured images. We describe a procedure for interpolating panoramas captured at the four corners of a rectangular area without geometry, and present experimental results including a real-time walkthrough. Our image-based method enables walking through a space much more easily than 3D modeling and rendering.
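The four-corner interpolation can be pictured as bilinear blending of the corner panoramas. The sketch below (hypothetical, in NumPy) shows only the weighting scheme; the actual method also has to align corresponding content between panoramas before blending.

```python
import numpy as np

def interpolate_view(corners, u, v):
    """Blend four corner panoramas with bilinear weights for a viewpoint
    at normalized position (u, v) inside the rectangle. Plain cross-fading
    is shown here only to illustrate the weighting scheme."""
    p00, p10, p01, p11 = corners  # panoramas at (0,0), (1,0), (0,1), (1,1)
    w00 = (1 - u) * (1 - v)
    w10 = u * (1 - v)
    w01 = (1 - u) * v
    w11 = u * v
    return w00 * p00 + w10 * p10 + w01 * p01 + w11 * p11

# Tiny stand-in panoramas: at the rectangle center every corner
# contributes a quarter of the result.
p00 = np.zeros((2, 2, 3)); p10 = np.ones((2, 2, 3))
p01 = np.ones((2, 2, 3));  p11 = np.zeros((2, 2, 3))
mid = interpolate_view((p00, p10, p01, p11), 0.5, 0.5)
```

At a corner the weights collapse to that single panorama, so captured viewpoints are reproduced exactly.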
Panorama image interpolation for real-time walkthrough. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945111
M. Y. Saraiji, Shota Sugimoto, C. Fernando, K. Minamizawa, S. Tachi
We propose "Layered Telepresence", a novel method of experiencing simultaneous multi-presence. The user's eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges the audio-visual information received through the robots into a priority-driven layered stack. A weighted feature map is created from the objects recognized in each layer using image-processing techniques, and the most heavily weighted layer around the user's gaze is pushed into the foreground. All other layers are pushed to the background, providing an artificial depth-of-field effect. The proposed method works not only with robots: each layer could represent any audio-visual content, such as a video see-through HMD, a television screen, or even a PC screen, enabling true multitasking.
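The priority-driven stack can be sketched as a reordering of layers by how strongly their recognized objects match the gaze target. The layer names, objects, and weights below are hypothetical stand-ins for the weighted feature maps described above.

```python
def arrange_layers(layers, gaze_object):
    """Reorder audio-visual layers so the one whose recognized objects
    best match the user's gaze target comes first (foreground); the rest
    fall to the background in descending weight order. `layers` is a
    list of (name, {object_name: weight}) pairs."""
    def weight(layer):
        _, feature_map = layer
        return feature_map.get(gaze_object, 0.0)
    return sorted(layers, key=weight, reverse=True)

layers = [("robot_a", {"person": 0.2, "door": 0.7}),
          ("robot_b", {"person": 0.9}),
          ("pc_screen", {"window": 0.5})]
ordered = arrange_layers(layers, "person")  # robot_b to the foreground
```

In the full system the background layers would additionally be blurred and attenuated to produce the artificial depth-of-field effect.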
Layered telepresence: simultaneous multi presence experience using eye gaze based perceptual awareness blending. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945098
For a stereoscopic augmented reality (AR) system, continuous feature tracking of the observed target is required to place a virtual object in real-world coordinates. In addition, the two cameras must be separated by the proper distance to obtain correct stereo images for video see-through applications. Both higher resolution and a higher frame rate (FPS) improve the user experience. However, feature tracking can become the bottleneck with high-resolution images, and latency increases if image processing is done before tracking.
ThirdEye: a coaxial feature tracking system for stereoscopic video see-through augmented reality. Yu-Xiang Wang, Yu-Ju Tsai, Yu-Hsuan Huang, Wan-ling Yang, Tzu-Chieh Yu, Yu-Kai Chiu, M. Ouhyoung. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945100
In recent years, personalized fabrication has attracted much attention due to the greatly improved accessibility of consumer-level 3D printers. However, 3D printers still suffer from relatively long production times and limited output size, which are undesirable for large-scale rapid prototyping. Zometool, a popular building-block system widely used for education and entertainment, can potentially provide an alternative solution in such scenarios. However, even for 3D models of moderate complexity, novice users may have difficulty building visually plausible results by themselves. Therefore, the goal of this work is to develop an automatic system that assists users in Zometool rapid prototyping of a specified 3D shape. Compared with previous work [Zimmer and Kobbelt 2014], our method achieves easier assembly and more economical use of building units because we generate the Zometool structures through a higher level of shape abstraction.
Large-scale rapid-prototyping with zometool. Chun-Kai Huang, Tsung-Hung Wu, Yi-Ling Chen, Bing-Yu Chen. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945155
Hiroko Iwasaki, Momoko Kondo, Rei Ito, Saya Sugiura, Yuka Oba, S. Mizuno
In this paper, we propose a method to interact with virtual shadows through the real shadows of various physical objects by using two projectors. In our method, the system scans physical objects in front of a projector, generates virtual shadows with CG according to the scan data, and superimposes the virtual shadows onto the real shadows of the physical objects with the projector. Another projector is used to superimpose virtual light sources inside the real shadows. Our method enables novel interaction with various shadows, such as the shadows of flower arrangements.
Interaction with virtual shadow through real shadow using two projectors. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945121
We present a new storage scheme for computer-graphics images based on OpenEXR 2. Using such EXR/Id files, the compositing artist can isolate a selection of objects (by picking them or by using a regular expression to match their names) and color-correct them with no edge artifacts, which previously could not be achieved without rendering the selected objects on their own layer. This file format avoids back-and-forth between the rendering and compositing departments, because mask images and layering are no longer needed. The technique is demonstrated in an open-source software suite, including a library to read and write EXR/Id files and an OpenFX plug-in that generates the images in any compositing software.
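To illustrate the selection step only, the following hypothetical sketch builds a per-pixel mask from an integer object-id image and a regular expression over object names. The actual EXR/Id layout, and the per-sample coverage that makes the edges antialiased, are more involved than this.

```python
import re
import numpy as np

def select_ids(id_map, names, pattern):
    """Boolean mask of pixels whose object id has a name matching
    `pattern`. `id_map` is a per-pixel integer id image and `names`
    maps ids to object names -- a simplified stand-in for the id
    information stored in an EXR/Id file."""
    wanted = [i for i, n in names.items() if re.search(pattern, n)]
    return np.isin(id_map, wanted)

# Hypothetical 2x2 id image: apply a correction only to the "tree" objects.
id_map = np.array([[1, 2],
                   [3, 1]])
names = {1: "tree_a", 2: "rock", 3: "tree_b"}
mask = select_ids(id_map, names, r"^tree")
```

A compositor would then apply the grade only where `mask` is true, leaving all other objects untouched.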
OpenEXR/Id isolate any object with a perfect antialiasing. Cyril Corvazier, B. Legros, Rachid Chikh. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945136
Pedro Rossa, Nicolas Hoffman, João Ricardo Bittencourt, Fernando P. Marson, V. Cassol
In this work we explore the use of games and VR to support the teaching of history in Brazil. We developed a game and a VR experience based on local technology. In our approach, the player takes the role of an indigenous inhabitant of the Jesuit Reductions in the south of Brazil who is asked to practice bow-and-arrow shooting.
Living the past: the use of VR to provide a historical experience. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945169
In this work, we have developed an approach to include global-illumination effects in charcoal drawing (see Figure 1). Our charcoal shader provides a robust computation that obtains a charcoal effect for a wide variety of diffuse and specular materials. Our contributions can be summarized as follows: (1) a barycentric shader based on degree-zero B-spline basis functions; (2) a set of hand-drawn charcoal control-texture images that naturally provide the desired charcoal look-and-feel; and (3) a painter's hierarchy for handling the large number of shading parameters consistently with charcoal drawing.
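A degree-zero B-spline basis is piecewise constant, so each shading value selects exactly one of the hand-drawn control textures rather than blending between them. The sketch below (illustrative only, not the authors' shader code) shows that selection step.

```python
import numpy as np

def pick_control_texture(shade, num_textures):
    """Degree-zero (piecewise-constant) B-spline selection: quantize a
    shading value in [0, 1] to the index of one hand-drawn control
    texture, so the charcoal look switches discretely between the
    drawn samples instead of cross-fading them."""
    shade = np.clip(shade, 0.0, 1.0)
    idx = (shade * num_textures).astype(int)
    return np.minimum(idx, num_textures - 1)  # shade == 1.0 maps to the last texture

# With four control textures, shading values fall into four equal bins.
indices = pick_control_texture(np.array([0.0, 0.49, 0.99, 1.0]), 4)
```

The global-illumination result would supply `shade` per pixel, and the chosen texture per pixel supplies the charcoal strokes.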
Charcoal rendering and shading with reflections. Yuxiao Du, E. Akleman. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945110
Mie Sato, Sota Suzuki, Daiki Ebihara, Sho Kato, Sato Ishigaki
Bare-hand interaction with a virtual object avoids the discomfort of devices mounted on the user's hand. There are some studies on bare-hand interaction [Benko et al. 2012]; however, in them the virtual object is assumed to be hard, or the user touches a physical object during the interaction. We focus on grasping a virtual object without using any physical object. Grasping is one of the basic movements in manipulating an object and is more difficult than simple movements such as touching. Because the bare-hand interaction involves no physical object, there is no haptic device on the user's hand and thus no physical feedback to the user. Our challenge is to provide the user with pseudo-softness while grasping a virtual object with a bare hand. We have been developing an AR system that makes it possible for a user to grasp a virtual object with a bare hand [Suzuki et al. 2014]. Using this AR system, we propose visual stimuli, corresponding to the user's hand movements, that manipulate the pseudo-softness of a virtual object. Evaluation results show that with these visual stimuli a user feels pseudo-softness while grasping a virtual object with a bare hand.
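One plausible visual stimulus of the kind described is to squash the virtual object in proportion to how far the fingers have closed past first contact, scaled by a softness parameter. The mapping below is a hypothetical sketch of such a stimulus, not the authors' implementation; all parameters are illustrative.

```python
def squash_scale(closure, softness, max_squash=0.5):
    """Map finger closure past first contact (0 = just touching,
    1 = fully closed) to a vertical scale factor for the object.
    A softer object deforms more for the same hand movement, which
    is what visually suggests softness to the user."""
    squash = min(max(closure, 0.0) * softness, max_squash)
    return 1.0 - squash

# A soft object (softness 0.9) squashes to the clamp; a firm one barely moves.
soft = squash_scale(1.0, 0.9)   # clamped at max_squash
firm = squash_scale(1.0, 0.1)
```

Rendering the object at this scale each frame ties the deformation to the hand movement, which is the correspondence the visual stimuli above rely on.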
Pseudo-softness evaluation in grasping a virtual object with a bare hand. ACM SIGGRAPH 2016 Posters, July 24, 2016. doi:10.1145/2945078.2945118