Title: Mobile multisensory augmentations with the CultAR platform
Author: Antti Nurminen
DOI: 10.1145/2818427.2818457
Published in: SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, November 2, 2015

The human sensory system is a complex mechanism that provides us with a wealth of data about our environment. Our nervous system constantly updates our awareness of the environment based on this multisensory input. We are attuned to cues that may alert us to danger or invite closer inspection. We present the first integrated mobile platform with state-of-the-art visual, aural, and haptic augmentation interfaces, supporting localization and directionality where applicable. With these interfaces, we convey cues to our users in the context of urban cultural experiences. We discuss the orchestration of such multimodal outputs and provide indicative guidelines based on our work.
Title: Apparent resolution enhancement for near-eye light field display
Authors: Xuyang Wang, Yangdong Deng, Guiju Zhang, Zhihua Wang
DOI: 10.1145/2818427.2818441

Light field 3-D displays enable a stereoscopic visual experience by simultaneously delivering multiple images, corresponding to varying viewpoints, to a viewer. When used in a near-eye wearable display setup, a light field display can offer a richer set of depth cues than conventional binocular-parallax-based 3-D displays. The problem, however, is that multi-view rendering on a single display inevitably leads to a perceptible reduction in resolution. In this work, we propose a novel ray-tracing-based resolution enhancement framework for light field displays. In our approach, the multi-view light field is rendered with a ray-tracing engine, while the apparent resolution is enhanced by generating a sequence of specifically designed images for each frame and displaying them at a higher refresh rate. The synthesized images are designed to create the same visual perception as a high-resolution image. By shooting several rays toward each pixel, in a manner similar to anti-aliasing, the synthesis process can be seamlessly integrated into a ray-tracing flow. The proposed algorithm was implemented on a near-eye light field display system. Experimental results, theoretical analysis, and subjective evaluations demonstrate the effectiveness of the proposed algorithms.
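The core idea, trading refresh rate for apparent spatial resolution, can be pictured with a toy decomposition: a high-resolution target is split into shifted low-resolution subframes whose rapid temporal average the eye integrates back into the fine image. This is a simplified stand-in for the paper's ray-traced per-frame synthesis; the function names and the k-by-k shift pattern are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def make_subframes(target, k=2):
    """Split a high-resolution target into k*k shifted low-resolution
    subframes. Displayed in rapid succession, the eye integrates them,
    approximating the high-resolution image."""
    subframes = []
    for dy in range(k):
        for dx in range(k):
            # Each subframe samples the target on a shifted coarse grid,
            # analogous to shooting rays through shifted sub-pixel positions.
            subframes.append(target[dy::k, dx::k])
    return subframes

def perceived(subframes, k=2):
    """Temporal average as seen by the viewer: place each subframe back
    onto the fine grid at its shift, then average over time."""
    h, w = subframes[0].shape
    acc = np.zeros((h * k, w * k))
    for i, sf in enumerate(subframes):
        dy, dx = divmod(i, k)
        up = np.zeros_like(acc)
        up[dy::k, dx::k] = sf * (k * k)  # each pixel is lit 1/(k*k) of the time
        acc += up
    return acc / len(subframes)
```

Under this idealized integration model, the perceived image reconstructs the high-resolution target exactly; a real display adds persistence and motion effects the sketch ignores.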
Title: Tag it!: AR annotation using wearable sensors
Authors: Alaeddin Nassani, Huidong Bai, Gun A. Lee, M. Billinghurst
DOI: 10.1145/2818427.2818438

In this paper we describe a wearable system that allows people to place and interact with 3D virtual tags around them. The system combines two wearable technologies: a head-worn wearable computer (Google Glass) and a chest-worn depth sensor (Tango). The Google Glass generates and displays virtual information to the user, while the Tango provides robust indoor position tracking for the Glass. The Tango enables spatial awareness of the surrounding world using various motion sensors, including 3D depth sensing, an accelerometer, and a motion-tracking camera. Together, these systems allow users to create a virtual tag via voice input and register it to a physical object or position in 3D space as an augmented annotation. We describe the design and implementation of the system, user feedback, research implications, and directions for future work.
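The registration step, anchoring a tag in world space by composing the tracked device pose with the tag's placement in the device frame, can be sketched with plain rigid transforms. The matrix layout and function names are assumptions for illustration; the actual Glass/Tango pipeline is not specified at this level in the abstract.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 rigid transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def register_tag(device_pose_world, tag_offset_device):
    """Anchor a tag in world space: compose the tracked device pose
    (world <- device) with the tag's placement in the device frame."""
    return device_pose_world @ tag_offset_device

def tag_in_view(tag_pose_world, device_pose_world):
    """Re-express the fixed world-space tag in the current device frame,
    so it stays glued to its physical location as the user moves."""
    return np.linalg.inv(device_pose_world) @ tag_pose_world
```

For example, a device at x = 1 m placing a tag 2 m in front of it (along -z) yields a tag anchored at world position (1, 0, -2), independent of where the device moves afterwards.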
Title: Mixed-reality web shopping system using panoramic view inside real store
Authors: M. Ohta, Shunsuke Nagano, Koichi Nagata, K. Yamashita
DOI: 10.1145/2818427.2818456

In recent years, support for "disadvantaged shoppers" has been actively considered in Japan. Disadvantaged shoppers, that is, people who find shopping difficult, include not only senior citizens living in rural districts but also people who would like to have enough free time to go shopping yet cannot because of their jobs or the demands of family care and child-rearing.
Title: A fast and robust pipeline for populating mobile AR scenes with gamified virtual characters
Authors: M. Papaefthymiou, Andrew W. Feng, Ari Shapiro, G. Papagiannakis
DOI: 10.1145/2818427.2818463

In this work we present a complete methodology for the robust authoring of AR virtual characters, powered by a versatile character animation framework (SmartBody), using only mobile devices. We can author and fully augment any open space with life-size, animated, geometrically accurately registered virtual characters in less than one minute using only modern smartphones or tablets, and then automatically revive this augmentation for subsequent activations from the same spot in under a few seconds. We also handle scene-authoring rotations of AR objects efficiently using Geometric Algebra rotors, yielding higher-quality visual results. Moreover, we have implemented a mobile version of the Precomputed Radiance Transfer global illumination algorithm for diffuse, shadowed characters in real time, using High Dynamic Range (HDR) environment maps, integrated into our open-source OpenGL Geometric Application (glGA) framework. Effective character interaction, based on the SmartBody framework, plays a fundamental role in attaining a high level of believability and makes the AR application more attractive and immersive.
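The Geometric Algebra rotor rotation mentioned above fits in a few lines: a 3D rotor is isomorphic to a unit quaternion, and the sandwich product R v R~ expands into ordinary vector algebra. This is a generic sketch of rotor rotation, not the glGA implementation.

```python
import numpy as np

def rotor(axis, angle):
    """A rotor for rotation by `angle` about `axis`. In 3D GA,
    R = cos(a/2) - sin(a/2) B, where B is the unit bivector dual to the
    axis; we store the equivalent (scalar, dual-vector) pair."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.cos(angle / 2), np.sin(angle / 2) * axis

def rotate(r, v):
    """Apply the sandwich product v' = R v R~, expanded in vector algebra."""
    s, u = r
    v = np.asarray(v, dtype=float)
    return v + 2 * s * np.cross(u, v) + 2 * np.cross(u, np.cross(u, v))
```

For instance, rotating the x axis by 90 degrees about z with `rotate(rotor([0, 0, 1], np.pi / 2), [1, 0, 0])` yields the y axis. Composing two rotors is a single geometric product, which is one reason rotors are convenient for chained scene-authoring rotations.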
Title: A mobile ray tracing engine with hybrid number representations
Authors: S. Hwang, J. D. Lee, Youngsam Shin, Won-Jong Lee, Soojung Ryu
DOI: 10.1145/2818427.2818446

This paper presents optimization techniques devised for a hardware ray tracing engine developed for mobile platforms. Whereas conventional designs use either fixed-point or floating-point numbers, the proposed techniques are based on hybrid number representations that combine the two. Carefully mixing the two heterogeneous number representations in computation and value encoding improves the efficiency of the ray tracing engine in terms of both energy and silicon area. Compared to a floating-point-based design, area reductions of 35% and 16% were achieved in the ray-box and ray-triangle intersection units, respectively. In addition, the hybrid representation can encode a bounding box in 40% less space at a reasonably low cost.
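The bounding-box figure hints at a common trick in compressed acceleration structures: storing child boxes as low-bit fixed-point offsets within their parent, rounded conservatively so ray tests never miss geometry. The paper does not give its encoding, so the 8-bit scheme below is only an illustrative assumption.

```python
import math

def quantize_box(box, parent, bits=8):
    """Encode a child AABB as fixed-point offsets inside its parent box.
    Rounding is conservative (floor the min, ceil the max) so the decoded
    box always contains the original; intersection tests against it can
    then use cheap fixed-point arithmetic."""
    scale = (1 << bits) - 1
    enc = []
    for axis in range(3):
        lo, hi = parent[0][axis], parent[1][axis]
        extent = (hi - lo) or 1.0  # guard against a degenerate parent
        qmin = math.floor((box[0][axis] - lo) / extent * scale)
        qmax = math.ceil((box[1][axis] - lo) / extent * scale)
        enc.append((max(qmin, 0), min(qmax, scale)))
    return enc

def dequantize_box(enc, parent, bits=8):
    """Recover a (slightly grown) floating-point box from its encoding."""
    scale = (1 << bits) - 1
    mins, maxs = [], []
    for axis in range(3):
        lo, hi = parent[0][axis], parent[1][axis]
        extent = (hi - lo) or 1.0
        mins.append(lo + enc[axis][0] / scale * extent)
        maxs.append(lo + enc[axis][1] / scale * extent)
    return mins, maxs
```

With 8 bits per bound instead of a 32-bit float, each coordinate shrinks by a factor of four, at the cost of slightly looser boxes and a few extra ray-box tests.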
Title: Up-to-date virtual UX of the Kesennuma-Yokocho food stall village: integration with social media
Authors: Ryosuke Ichikari, Ryo Yamashita, K. Thangamani, T. Kurata
DOI: 10.1145/2818427.2818450

The objective of this study is to develop a 3D application that provides an up-to-date virtual experience of the Kesennuma-Yokocho food stall village. The app aims to let users feel the "here and now" atmosphere of the site, even from a remote location. To this end, we integrate visualizations of 3D-CG models with articles from social media to keep the content fresh. Social media allows a user to check the status of the site, which changes every day; however, the volume of articles posted can be overwhelming. We therefore propose a filtering method that estimates the freshness of each article based on its timestamp and on text data, including date descriptions. Up-to-date articles are superimposed on the visualization of photorealistic 3D-CG models, which can themselves be updated at reasonable cost.
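A freshness estimator of the kind described could look roughly like the following: decay a score with the article's age, but let an explicit date mentioned in the text override the post timestamp when it is more recent (e.g. a post announcing an event for today). The heuristic, the date pattern, and the half-life parameter are all assumptions for illustration; the paper's actual estimator is not specified in the abstract.

```python
import re
from datetime import datetime

# Matches dates like 2015-11-01 or 2015/11/1 in an article's text.
DATE_PATTERN = re.compile(r"(\d{4})[/-](\d{1,2})[/-](\d{1,2})")

def freshness(article_text, posted_at, now, half_life_days=3.0):
    """Score an article's freshness in (0, 1]. The score decays
    exponentially with the age of the most relevant date: the post
    timestamp, or a more recent date mentioned in the text."""
    effective = posted_at
    for y, m, d in DATE_PATTERN.findall(article_text):
        try:
            mentioned = datetime(int(y), int(m), int(d))
        except ValueError:
            continue  # ignore malformed matches such as 2015-13-40
        if posted_at <= mentioned <= now:
            effective = max(effective, mentioned)
    age_days = max((now - effective).total_seconds() / 86400.0, 0.0)
    return 0.5 ** (age_days / half_life_days)
```

An article posted two weeks ago that mentions yesterday's date would thus outrank an undated article posted last week, matching the intuition that explicit date descriptions carry freshness information beyond the timestamp.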
Title: Collaborative magic lens graph exploration
Authors: Daniel Drochtert, C. Geiger
DOI: 10.1145/2818427.2818465

We present the design and implementation of a prototype consisting of several mobile devices that allows multi-user exploration of dynamically visualised graphs of large data sets in a mixed reality environment. Tablet devices represent nodes and are placed on a table as augmented reality tracking targets. From these nodes, a graph is dynamically loaded and visualised in mixed reality space. Multiple users can interact with the graph through further mobile devices acting as magic lenses. We explore different interaction methods for basic graph exploration tasks, building on previous research in interactive graph exploration.
Title: Interactive animated mobile information visualisation
Author: Paul Craig
DOI: 10.1145/2818427.2818458

While the potential of mobile information visualisation is widely recognized, there is still relatively little research in this area and few practical guidelines for the design of mobile information visualisation interfaces. Indeed, there appears to be a general feeling in the interface design community that mobile visualisation should be limited to simple operations and small-scale data. Information visualisation research has so far concentrated on desktop PCs and larger displays, while interfaces for more compact mobile devices have been neglected, despite the increasing popularity and widespread use of smartphones and other new mobile technologies. In this paper we address this issue by developing a set of low-level interface design guidelines for mobile information visualisation, derived by considering a basic set of interactions in relation to mobile device limitations. Our results suggest that the mindful application of existing information visualisation techniques can overcome many mobile device limitations, and that the proper implementation of interaction mechanisms and animated view transitions is key to effective mobile information visualisation. This is illustrated with case studies of a coordinated map and timeline interface for geo-temporal data, a distorted scatter plot, and a space-filling hierarchy view.
Title: MovieTile: interactively adjustable free shape multi-display of mobile devices
Authors: Takashige Ohta, Jun Tanaka
DOI: 10.1145/2818427.2818436

We developed MovieTile, a system for constructing multi-display environments from multiple mobile devices. The system offers a simple and intuitive interface for configuring a display arrangement, and allows devices with different screen sizes to be combined into a single virtual screen of free shape. The system delivers a movie file to all devices, and the movie is then played back so that it fills the entire combined screen. We expect this system to make free-shape screens much easier to use. It consists of two applications: one for a controller and one for the screen devices. This report describes the design and mechanism of the configuration interface and the system.
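One way to picture the arrangement-to-playback mapping: each device's physical placement on the table determines a crop rectangle in the movie frame, so that the tiled screens jointly show one image. The data layout and function below are a hypothetical sketch, not MovieTile's actual protocol.

```python
def device_crops(devices, frame_w, frame_h):
    """Map each device's physical placement to a crop rectangle in
    movie-frame pixels. `devices` is a list of dicts with physical
    x, y, w, h (e.g. in millimetres) measured on the table."""
    # Bounding box of the whole free-shape arrangement, in physical units.
    min_x = min(d["x"] for d in devices)
    min_y = min(d["y"] for d in devices)
    max_x = max(d["x"] + d["w"] for d in devices)
    max_y = max(d["y"] + d["h"] for d in devices)
    # Scale physical units to movie-frame pixels along each axis.
    sx = frame_w / (max_x - min_x)
    sy = frame_h / (max_y - min_y)
    return [
        {
            "left": round((d["x"] - min_x) * sx),
            "top": round((d["y"] - min_y) * sy),
            "width": round(d["w"] * sx),
            "height": round(d["h"] * sy),
        }
        for d in devices
    ]
```

Two 100 mm wide devices placed side by side, for example, each receive the matching half of the frame; gaps between bezels could be handled by padding the physical rectangles, which this sketch omits.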