
SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications — Latest Publications

Mobile multisensory augmentations with the CultAR platform
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818457
Antti Nurminen
The human sensory system is a complex mechanism, providing us with a wealth of data from our environment. Our nervous system constantly updates our awareness of the environment based on this multisensory input. We are attuned to cues, which may alert us to a danger or invite closer inspection. We present the first integrated mobile platform with state-of-the-art visual, aural and haptic augmentation interfaces, supporting localization and directionality where applicable. With these interfaces, we convey cues to our users in the context of urban cultural experiences. We discuss the orchestration of such multimodal outputs and provide indicative guidelines based on our work.
Citations: 0
Apparent resolution enhancement for near-eye light field display
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818441
Xuyang Wang, Yangdong Deng, Guiju Zhang, Zhihua Wang
Light field 3-D displays enable a stereoscopic visual experience by simultaneously delivering multiple images, corresponding to varying viewpoints, to a viewer. When used in a near-eye wearable display setup, the light field display can offer a richer set of depth cues than conventional binocular-parallax-based 3-D displays. The problem, however, is that multi-view rendering on a single display inevitably leads to a perceptible reduction in resolution. In this work, we propose a novel ray-tracing-based resolution enhancement framework for light field displays. In our approach, the multi-view light field is rendered with a ray-tracing engine, while the apparent resolution is enhanced by generating a sequence of specifically designed images for each frame and displaying them at a higher refresh rate. The synthesis of the images aims to create the same visual perception as a high-resolution image. By shooting several rays toward each pixel in a manner similar to anti-aliasing, the synthesis process can be seamlessly integrated into a ray tracing flow. The proposed algorithm was implemented on a near-eye light field display system. Experimental results, theoretical analysis and subjective evaluations proved the effectiveness of the proposed algorithms.
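The subframe idea can be sketched in a few lines. The toy below is a deliberate simplification (plain 2× decimation with sub-pixel offsets standing in for the paper's ray-traced sample placement): each subframe samples the high-resolution target at a different offset, and the eye's temporal integration of the rapidly displayed subframes is modelled as an average.

```python
import numpy as np

def synthesize_subframes(target, k=4):
    # Toy stand-in for the paper's ray-traced synthesis: each subframe
    # samples the high-res target at a different sub-pixel offset,
    # like the jittered samples used for anti-aliasing.
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)][:k]
    return [target[dy::2, dx::2] for dy, dx in offsets]

def perceived(subframes):
    # Displayed at k times the refresh rate, the eye integrates the
    # subframes over time; model that integration as a plain average.
    return np.mean(subframes, axis=0)
```

With four offsets, the perceived image equals the 2×2 box average of the high-resolution target, i.e. the information of all four samples per display pixel reaches the viewer over time.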
Citations: 4
Tag it!: AR annotation using wearable sensors
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818438
Alaeddin Nassani, Huidong Bai, Gun A. Lee, M. Billinghurst
In this paper we describe a wearable system that allows people to place and interact with 3D virtual tags positioned around them. It uses two wearable technologies: a head-worn wearable computer (Google Glass) and a chest-worn depth sensor (Tango). The Google Glass is used to generate and display virtual information to the user, while the Tango provides robust indoor position tracking for the Glass. The Tango enables spatial awareness of the surrounding world using various motion sensors, including 3D depth sensing, an accelerometer and a motion tracking camera. Using these systems together, users can create a virtual tag via voice input and then register the tag to a physical object or position in 3D space as an augmented annotation. We describe the design and implementation of the system, user feedback, research implications, and directions for future work.
Citations: 28
Mixed-reality web shopping system using panoramic view inside real store
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818456
M. Ohta, Shunsuke Nagano, Koichi Nagata, K. Yamashita
In recent years, support for "disadvantaged shoppers" has been actively considered in Japan. Disadvantaged shoppers, that is, people who find shopping difficult, include not only senior citizens living in rural districts, but also people who want enough free time to go shopping but cannot because of their jobs or the demands of family care and nurturing.
Citations: 4
A fast and robust pipeline for populating mobile AR scenes with gamified virtual characters
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818463
M. Papaefthymiou, Andrew W. Feng, Ari Shapiro, G. Papagiannakis
In this work we present a complete methodology for robust authoring of AR virtual characters, powered by a versatile character animation framework (SmartBody), using only mobile devices. With only a modern smartphone or tablet, we can author and fully augment any open space with life-size, animated, geometrically accurately registered virtual characters in less than a minute, and then automatically revive this augmentation for subsequent activations from the same spot in under a few seconds. We also handle scene-authoring rotations of the AR objects efficiently, using Geometric Algebra rotors to obtain higher-quality visual results. Moreover, we have implemented a mobile version of global illumination via a real-time Precomputed Radiance Transfer algorithm for diffusely shadowed characters, using High Dynamic Range (HDR) environment maps integrated into our open-source OpenGL Geometric Application (glGA) framework. Effective character interaction, built on the SmartBody framework, plays a fundamental role in attaining a high level of believability and makes the AR application more attractive and immersive.
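As an illustration of the rotor approach (not the authors' code), a rotor in the even subalgebra of G(3) is isomorphic to a unit quaternion, and applying the sandwich product R v R~ rotates a vector; a minimal sketch:

```python
import math

def rotor(axis, angle):
    # Unit rotor for a rotation by `angle` about the unit vector `axis`.
    # Components (scalar, plus three bivector coefficients) of the even
    # subalgebra of G(3), isomorphic to a unit quaternion.
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def rotate(r, v):
    # Sandwich product v' = R v R~, expanded into vector algebra
    # (the standard quaternion rotation formula).
    w, ux, uy, uz = r
    u = (ux, uy, uz)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    c1 = cross(u, v)
    c2 = cross(u, c1)
    return tuple(v[i] + 2.0 * (w * c1[i] + c2[i]) for i in range(3))
```

Rotors compose by multiplication and interpolate smoothly, which is why they suit incremental scene-authoring rotations.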
Citations: 15
A mobile ray tracing engine with hybrid number representations
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818446
S. Hwang, J. D. Lee, Youngsam Shin, Won-Jong Lee, Soojung Ryu
This paper presents optimization techniques devised for a hardware ray tracing engine developed for mobile platforms. Whereas conventional designs deal with either fixed-point or floating-point numbers, the proposed techniques are based on hybrid number representations combining the two. Carefully mixing the two heterogeneous number representations in computation and value encoding improves the efficiency of the ray tracing engine in terms of both energy and silicon area. Compared to a floating-point-based design, area reductions of 35% and 16% were achieved in the ray-box and ray-triangle intersection units, respectively. In addition, the hybrid representation can encode a bounding box in 40% less space at a reasonably low cost.
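The abstract does not give the exact encoding, but the general idea of compact fixed-point bounding boxes can be sketched as follows: store a child AABB as low-bit fixed-point offsets inside its parent box, rounding mins down and maxes up so the decoded box conservatively encloses the original (a hypothetical scheme, not the paper's format).

```python
import math

def encode_aabb(cmin, cmax, pmin, pmax, bits=8):
    # Quantize a child AABB to `bits`-bit fixed-point offsets inside its
    # parent box. Mins round down and maxes round up, so the decoded box
    # always encloses the original (safe for ray-box culling).
    scale = (1 << bits) - 1
    qmin, qmax = [], []
    for i in range(3):
        extent = pmax[i] - pmin[i]
        qmin.append(max(0, math.floor((cmin[i] - pmin[i]) / extent * scale)))
        qmax.append(min(scale, math.ceil((cmax[i] - pmin[i]) / extent * scale)))
    return qmin, qmax

def decode_aabb(qmin, qmax, pmin, pmax, bits=8):
    # Reconstruct a (slightly inflated) box from the quantized offsets.
    scale = (1 << bits) - 1
    dmin = [pmin[i] + qmin[i] / scale * (pmax[i] - pmin[i]) for i in range(3)]
    dmax = [pmin[i] + qmax[i] / scale * (pmax[i] - pmin[i]) for i in range(3)]
    return dmin, dmax
```

Six 8-bit offsets take 48 bits versus 192 bits for six 32-bit floats, which is the kind of saving the paper's 40% figure points at; conservative rounding only costs a few spurious traversal steps, never a missed intersection.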
Citations: 8
Up-to-date virtual UX of the Kesennuma-Yokocho food stall village: integration with social media
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818450
Ryosuke Ichikari, Ryo Yamashita, K. Thangamani, T. Kurata
The objective of this study is to develop a 3D application that provides an up-to-date virtual experience of the Kesennuma-Yokocho food stall village. The app aims to let users feel the "here and now" atmosphere of the site, even from a remote location. To this end, we integrate visualizations of 3D-CG models with articles from social media to keep the contents fresh. Social media allows a user to check the status of the site, which changes every day. However, too much information may be posted on social media. In this research, we propose a filtering method that estimates the freshness of each article based on timestamps and text data, including date descriptions. Up-to-date articles can then be superimposed on visualizations of photorealistic 3D-CG models, which can themselves be updated at reasonable cost.
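A freshness filter of the kind described might look like the sketch below (the field names `text` and `posted_at`, the ISO date pattern, and the seven-day window are assumptions; the paper's actual features and thresholds are not given here):

```python
from datetime import datetime, timedelta
import re

def is_fresh(article, now, max_age_days=7):
    # An article is stale if its posting timestamp falls outside the
    # window, or if its text mentions a date that does (e.g. an old
    # event announcement reposted later).
    window = timedelta(days=max_age_days)
    if now - article["posted_at"] > window:
        return False
    for y, m, d in re.findall(r"(\d{4})-(\d{2})-(\d{2})", article["text"]):
        if now - datetime(int(y), int(m), int(d)) > window:
            return False
    return True
```

Combining the post timestamp with dates parsed out of the text is what lets the filter reject a recently posted article that actually describes a long-past event.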
Citations: 0
Collaborative magic lens graph exploration
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818465
Daniel Drochtert, C. Geiger
We present the design and implementation of a prototype consisting of several mobile devices that allows multi-user exploration of dynamically visualised graphs of large data sets in a mixed reality environment. Tablet devices represent nodes and are placed on a table as augmented reality tracking targets. From these nodes, a graph is dynamically loaded and visualised in mixed reality space. Multiple users can interact with the graph through further mobile devices acting as magic lenses. We explore different interaction methods for basic graph exploration tasks, building on previous research in interactive graph exploration.
Citations: 9
Interactive animated mobile information visualisation
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818458
Paul Craig
While the potential of mobile information visualisation is widely recognized, there is still relatively little research in this area and few practical guidelines for the design of mobile information visualisation interfaces. Indeed, there still appears to be a general feeling in the interface design community that mobile visualisation should be limited to simple operations and small-scale data. Information visualisation research has so far concentrated on desktop PCs and larger displays, while interfaces for more compact mobile devices have been neglected, despite the increasing popularity and widespread use of smartphones and other new mobile technologies. In this paper we address this issue by developing a set of low-level interface design guidelines for mobile information visualisation development. We do so by considering a basic set of interactions and relating them to mobile device limitations. Our results suggest that the mindful application of existing information visualisation techniques can overcome many mobile device limitations, and that proper implementation of interaction mechanisms and animated view transitions is key to effective mobile information visualisation. This is illustrated with case studies of a coordinated map-and-timeline interface for geo-temporal data, a distorted scatter plot, and a space-filling hierarchy view.
Citations: 9
MovieTile: interactively adjustable free shape multi-display of mobile devices
Pub Date : 2015-11-02 DOI: 10.1145/2818427.2818436
Takashige Ohta, Jun Tanaka
We developed MovieTile, a system for constructing multi-display environments from multiple mobile devices. The system offers a simple and intuitive interface for configuring a display arrangement, and allows devices with different screen sizes to be mixed to form a single virtual screen of free shape. The system delivers a movie file to all devices; the movie is then played so that it fills the entire virtual screen. We expect this system to make free-shape screens much easier to use. It consists of two applications: one for a controller and the other for the screen devices. This report describes the design and mechanism of the configuration interface and systems.
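The core arrangement math reduces to mapping each device's placement in a shared coordinate space to a normalized crop of the movie frame. A minimal sketch, assuming the controller already knows each screen's position and size in common physical units (the app's actual configuration format is not described in the abstract):

```python
def crop_rects(devices):
    # devices: list of (x, y, w, h) placements in shared physical units.
    # Returns one (u, v, uw, vh) normalized crop per device, so that all
    # screens together display a single virtual frame.
    min_x = min(x for x, y, w, h in devices)
    min_y = min(y for x, y, w, h in devices)
    max_x = max(x + w for x, y, w, h in devices)
    max_y = max(y + h for x, y, w, h in devices)
    W, H = max_x - min_x, max_y - min_y
    return [((x - min_x) / W, (y - min_y) / H, w / W, h / H)
            for x, y, w, h in devices]
```

Because the crops are computed from physical extents rather than pixel counts, devices of different resolutions and sizes can be mixed freely, which is the property the free-shape arrangement relies on.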
Citations: 13