The human sensory system is a complex mechanism that provides us with a wealth of data about our environment. Our nervous system continually updates our awareness of our surroundings based on this multisensory input. We are attuned to cues that may alert us to danger or invite closer inspection. We present the first integrated mobile platform with state-of-the-art visual, aural, and haptic augmentation interfaces, supporting localization and directionality where applicable. With these interfaces, we convey cues to our users in the context of urban cultural experiences. We discuss the orchestration of such multimodal outputs and provide indicative guidelines based on our work.
{"title":"Mobile multisensory augmentations with the CultAR platform","authors":"Antti Nurminen","doi":"10.1145/2818427.2818457","DOIUrl":"https://doi.org/10.1145/2818427.2818457","url":null,"abstract":"Human sensory system is a complex mechanism, providing us with a wealth of data from our environment. Our nervous system constantly updates our awareness of the environment based on this multisensory input. We are attuned to cues, which may alert of a danger, or invite for closer inspection. We present the first integrated mobile platform with state-of-the-art visual, aural and haptic augmentation interfaces, supporting localization and directionality where applicable. With these interfaces, we convey cues to our users in the context of urban cultural experiences. We discuss the orchestration of such multimodal outputs and provide indicative guidelines based on our work.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116925577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xuyang Wang, Yangdong Deng, Guiju Zhang, Zhihua Wang
Light field 3-D displays enable a stereoscopic visual experience by simultaneously delivering multiple images, corresponding to varying viewpoints, to a viewer. When used in a near-eye wearable setup, a light field display can offer a richer set of depth cues than conventional binocular-parallax-based 3-D displays. The problem, however, is that rendering multiple views on a single display inevitably reduces the perceived resolution. In this work, we propose a novel ray-tracing-based resolution enhancement framework for light field displays. In our approach, the multi-view light field is rendered with a ray-tracing engine, while the apparent resolution is enhanced by generating a sequence of specifically designed images for each frame and displaying them at a higher refresh rate. The synthesis aims to create the same visual perception as a high-resolution image. By shooting several rays toward each pixel, in a manner similar to anti-aliasing, the synthesis process can be seamlessly integrated into a ray-tracing flow. The proposed algorithm was implemented on a near-eye light field display system. Experimental results, together with theoretical analysis and subjective evaluations, demonstrate the effectiveness of the proposed algorithms.
{"title":"Apparent resolution enhancement for near-eye light field display","authors":"Xuyang Wang, Yangdong Deng, Guiju Zhang, Zhihua Wang","doi":"10.1145/2818427.2818441","DOIUrl":"https://doi.org/10.1145/2818427.2818441","url":null,"abstract":"Light field 3-D displays enable stereoscopic visual experience by simultaneously delivering multiple images corresponding to varying viewpoints to a viewer. When used in a near-eye wearable display setup, the light field display is able to offer a richer set of depth cues than the conventional binocular parallax based 3-D displays. The problem, however, is that the multi-view rendering on a single display inevitably leads to a reduced resolution that can be perceived. In this work, we propose a novel ray tracing based resolution enhancement framework for light field displays. In our approach, the multi-view light field is rendered with a ray-tracing engine, while the enhancement of the apparent resolution is achieved by generating a sequence of specifically designed images for each frame and displaying the images at a higher refreshing rate. The synthesis of the images is aimed to create the same visual perception results a high-resolution image does. By shooting several rays toward each pixel in a manner similar to anti-aliasing, the synthesis process can be seamlessly integrated into a ray tracing flow. The proposed algorithm was implemented on a near-eye light field display system. Experimental results as well as theoretic analysis and subjective evaluations proved the effectiveness of the proposed algorithms.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125861656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alaeddin Nassani, Huidong Bai, Gun A. Lee, M. Billinghurst
In this paper we describe a wearable system that allows people to place and interact with 3D virtual tags positioned around them. It combines two wearable technologies: a head-worn computer (Google Glass) and a chest-worn depth sensor (Tango). The Google Glass generates and displays virtual information to the user, while the Tango provides robust indoor position tracking for the Glass. The Tango enables spatial awareness of the surrounding world using various motion sensors, including a 3D depth sensor, an accelerometer, and a motion-tracking camera. Together, these systems allow users to create a virtual tag via voice input and register it to a physical object or position in 3D space as an augmented annotation. We describe the design and implementation of the system, user feedback, research implications, and directions for future work.
{"title":"Tag it!: AR annotation using wearable sensors","authors":"Alaeddin Nassani, Huidong Bai, Gun A. Lee, M. Billinghurst","doi":"10.1145/2818427.2818438","DOIUrl":"https://doi.org/10.1145/2818427.2818438","url":null,"abstract":"In this paper we describe a wearable system that allows people to place and interact with 3D virtual tags placed around them. This uses two wearable technologies: a head-worn wearable computer (Google Glass) and a chest-worn depth sensor (Tango). The Google Glass is used to generate and display virtual information to the user, while the Tango is used to provide robust indoor position tracking for the Glass. The Tango enables spatial awareness of the surrounding world using various motion sensors including 3D depth sensing, an accelerometer and a motion tracking camera. Using these systems together allows users to create a virtual tag via voice input and then register this tag to a physical object or position in 3D space as an augmented annotation. We describe the design and implementation of the system, user feedback, research implications, and directions for future work.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122811978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Ohta, Shunsuke Nagano, Koichi Nagata, K. Yamashita
In recent years, support for "disadvantaged shoppers" has been actively discussed in Japan. Disadvantaged shoppers, that is, people for whom shopping is difficult, include not only senior citizens living in rural districts but also people who would like to go shopping but cannot find the time because of their jobs or the demands of family care and child-rearing.
{"title":"Mixed-reality web shopping system using panoramic view inside real store","authors":"M. Ohta, Shunsuke Nagano, Koichi Nagata, K. Yamashita","doi":"10.1145/2818427.2818456","DOIUrl":"https://doi.org/10.1145/2818427.2818456","url":null,"abstract":"In recent years, support for \"disadvantaged shoppers\" has been actively considered in Japan. Disadvantaged shoppers, that is, people who feel difficulty in shopping, means not only senior citizens living in rural districts, but also people who want to have enough free time to go shopping but cannot do so because of their jobs or the demands of family care and nurturing.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122188997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Papaefthymiou, Andrew W. Feng, Ari Shapiro, G. Papagiannakis
In this work we present a complete methodology for the robust authoring of AR virtual characters, powered by a versatile character animation framework (SmartBody), using only mobile devices. With a modern smartphone or tablet alone, we can author and fully augment any open space with life-size, animated, geometrically accurately registered virtual characters in less than one minute, and then automatically revive this augmentation for subsequent activations from the same spot in a few seconds. We also handle rotations of AR objects during scene authoring efficiently using Geometric Algebra rotors, yielding higher-quality visual results. Moreover, we have implemented a mobile, real-time version of the Precomputed Radiance Transfer global illumination algorithm for diffuse, shadowed characters, using High Dynamic Range (HDR) environment maps, integrated into our open-source OpenGL Geometric Application (glGA) framework. Effective character interaction, based on the SmartBody framework, plays a fundamental role in attaining a high level of believability and makes the AR application more attractive and immersive.
{"title":"A fast and robust pipeline for populating mobile AR scenes with gamified virtual characters","authors":"M. Papaefthymiou, Andrew W. Feng, Ari Shapiro, G. Papagiannakis","doi":"10.1145/2818427.2818463","DOIUrl":"https://doi.org/10.1145/2818427.2818463","url":null,"abstract":"In this work we present a complete methodology for robust authoring of AR virtual characters powered from a versatile character animation framework (Smartbody), using only mobile devices. We can author, fully augment with life-size, animated, geometrically accurately registered virtual characters into any open space in less than 1 minute with only modern smartphones or tablets and then automatically revive this augmentation for subsequent activations from the same spot, in under a few seconds. Also, we handle efficiently scene authoring rotations of the AR objects using Geometric Algebra rotors in order to extract higher quality visual results. Moreover, we have implemented a mobile version of the global illumination for real-time Precomputed Radiance Transfer algorithm for diffuse shadowed characters in real-time, using High Dynamic Range (HDR) environment maps integrated in our open-source OpenGL Geometric Application (glGA) framework. Effective character interaction plays fundamental role in attaining high level of believability and makes the AR application more attractive and immersive based on the SmartBody framework.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130109484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Hwang, J. D. Lee, Youngsam Shin, Won-Jong Lee, Soojung Ryu
This paper presents optimization techniques devised for a hardware ray tracing engine developed for mobile platforms. Whereas conventional designs deal with either fixed-point or floating-point numbers, the proposed techniques are based on hybrid number representations that combine the two. Carefully mixing the two heterogeneous number representations in computation and value encoding improves the efficiency of the ray tracing engine in terms of both energy and silicon area. Compared to a floating-point-based design, area reductions of 35% and 16% were achieved in the ray-box and ray-triangle intersection units, respectively. In addition, the hybrid representation can encode a bounding box in 40% less space at a reasonably low cost.
{"title":"A mobile ray tracing engine with hybrid number representations","authors":"S. Hwang, J. D. Lee, Youngsam Shin, Won-Jong Lee, Soojung Ryu","doi":"10.1145/2818427.2818446","DOIUrl":"https://doi.org/10.1145/2818427.2818446","url":null,"abstract":"This paper presents optimization techniques devised to a hardware ray tracing engine which has been developed for mobile platforms. Whereas conventional designs deal with either fixed-point or floating-point numbers, the proposed techniques are based on hybrid number representations with fixed-point and floating-point ones. Carefully mixing the two heterogeneous number representations in computation and value encoding could improve efficiency of the ray tracing engine in terms of both energy and silicon area. Compared to a floating-point-based design, 35% and 16% area reduction was achieved in ray-box and ray-triangle intersection units, respectively. In addition, such hybrid representation could encode a bounding box in 40% smaller space at a reasonably low cost.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129620249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ryosuke Ichikari, Ryo Yamashita, K. Thangamani, T. Kurata
The objective of this study is to develop a 3D application that provides an up-to-date virtual experience of the Kesennuma-Yokocho food stall village. The app aims to let users feel the "here and now" atmosphere of the site, even from a remote location. To this end, we integrate visualizations of 3D-CG models with articles from social media to keep the content fresh. Social media allows a user to check the status of the site, which changes every day; however, the volume of posted information can be overwhelming. We therefore propose a filtering method that estimates the freshness of each article based on its timestamp and on date descriptions in its text. The up-to-date articles are superimposed on photorealistic 3D-CG models, which can themselves be updated at reasonable cost.
{"title":"Up-to-date virtual UX of the Kesennuma-Yokocho food stall village: integration with social media","authors":"Ryosuke Ichikari, Ryo Yamashita, K. Thangamani, T. Kurata","doi":"10.1145/2818427.2818450","DOIUrl":"https://doi.org/10.1145/2818427.2818450","url":null,"abstract":"The objective of this study is to develop a 3D application that provides an up-to-date virtual experience of the Kesennuma-Yokocho food stall village. This app aims to enable users to feel the \"here and now\" atmosphere of the site, even from a remote location. To this end, we integrate visualizations with 3D-CG models and articles on social media to keep the contents fresh. Using social media allows a user to check the status of the site, which changes each day. However, there could be too much information posted on social media. In this research, we propose a filtering method to estimate the freshness of each article, based on timestamps and text data including date descriptions. These up-to-date articles can be superimposed on the visualization of photorealistic 3D-CG models which also can be updated with reasonable costs.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132100173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present the design and implementation of a prototype consisting of several mobile devices that allows multi-user exploration of dynamically visualised graphs of large data sets in a mixed reality environment. Tablet devices represent nodes and are placed on a table as augmented reality tracking targets. From these nodes, a graph is dynamically loaded and visualised in mixed reality space. Multiple users can interact with the graph through further mobile devices acting as magic lenses. We explore different interaction methods for basic graph exploration tasks, building on previous research in interactive graph exploration.
{"title":"Collaborative magic lens graph exploration","authors":"Daniel Drochtert, C. Geiger","doi":"10.1145/2818427.2818465","DOIUrl":"https://doi.org/10.1145/2818427.2818465","url":null,"abstract":"We present the design and implementation of a prototype consisting of several mobile devices that allow for multi user exploration of dynamically visualised graphs of large data sets in a mixed reality environment. Tablet devices are used to represent nodes and placed on a table as augmented reality tracking targets. From these nodes a graph is dynamically loaded and visualised in mixed reality space. Multiple users can interact with the graph through further mobile devices acting as magic lenses. We explore different interaction methods for basic graph exploration tasks based on the previous research in interactive graph exploration.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132177537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hiroyuki Adachi, Akimune Haruna, Seiko Myojin, N. Shimada
Various ways of supporting and enhancing communication have been researched [Terken and Sturm 2010; Bergstrom and Karahalios 2007]. However, most of this work is difficult to set up because it requires special equipment, for example a worn microphone or a room fitted with a projector. Our system [Adachi et al. 2014], on the other hand, requires only popular devices with two cameras and a display, such as tablets and smartphones, which can handle both sensing and visualization; it therefore has the advantage of being easy to deploy. In addition, because each participant has their own display, the system can provide different (controlled) information to each individual. We consider the system useful for brainstorming, group meetings, tabletop games with conversation, and similar settings.
{"title":"ScoringTalk and WatchingMeter: utterance and gaze visualization for co-located collaboration","authors":"Hiroyuki Adachi, Akimune Haruna, Seiko Myojin, N. Shimada","doi":"10.1145/2818427.2818455","DOIUrl":"https://doi.org/10.1145/2818427.2818455","url":null,"abstract":"In order to enhance communication, various ways for supporting communication have been researched [Terken and Sturm 2010; Bergstrom and Karahalios 2007]. However, most of these works are difficult to set up because these works need special things, for example, having or wearing a microphone, a room equipped with a projector. On the other hand, our system [Adachi et al. 2014] only requires devices with two cameras and a display such as tablets and smartphones since the devices can both sensing and visualizing, and popular, therefore the system has the advantage of being easy to use. In addition, our system can provide different (controlled) information to the individual since each participant has the own display. We consider the system is useful in brainstorming, group meetings, tabletop games with conversation, and so on.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133122246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We developed MovieTile, a system for constructing multi-display environments from multiple mobile devices. The system offers a simple and intuitive interface for configuring the display arrangement. It allows devices with different screen sizes to be combined into a single, freely shaped virtual screen. The system delivers a movie file to all devices, and the movie is then played back so that it fills the entire composite screen. We expect this system to make free-shape screens much easier to use. It consists of two applications: one for a controller and one for the screen devices. This report describes the design and mechanism of the configuration interface and the system.
{"title":"MovieTile: interactively adjustable free shape multi-display of mobile devices","authors":"Takashige Ohta, Jun Tanaka","doi":"10.1145/2818427.2818436","DOIUrl":"https://doi.org/10.1145/2818427.2818436","url":null,"abstract":"We developed MovieTile, a system for constructing multi-display environments using multiple mobile devices. The system offers a simple and intuitive interface for configuring a display arrangement. It enables the use of devices of different screen sizes mixed in forming a single virtual screen of free shape. The system delivers a movie file to all devices, then the movie is played so that it fills the entire screen. We expect that this system can offer opportunities to use free shape screens much more easily. The system consists of two applications: one for a controller and the other for screen devices. This report describes the design and mechanism of the configuration interface and systems.","PeriodicalId":328982,"journal":{"name":"SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129657082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}