
Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology: Latest Publications

FlexStroke: a jamming brush tip simulating multiple painting tools on digital platform
Xin Liu, Haijun Xia, J. Gu
We propose a new system that enables a realistic painting experience on a digital platform and extends it to multiple stroke types for different painting needs. In this paper, we describe how FlexStroke is used as a Chinese brush, an oil brush, and a crayon through changes to its jamming tip. The tip provides different levels of stiffness depending on its jamming structure. Visual simulations on PixelSense complement the intuitive painting process with highly realistic display results.
DOI: 10.1145/2508468.2514935 · Published 2013-10-08
Citations: 3
FingerSkate: making multi-touch operations less constrained and more continuous
Jeongmin Son, Geehyuk Lee
Multi-touch operations are sometimes difficult to perform due to musculoskeletal constraints. We propose FingerSkate, a variation of current multi-touch operations that makes them less constrained and more continuous. With FingerSkate, once a user starts a multi-touch operation, they can continue it without keeping both fingers on the screen. In a pilot study, we observed that participants learned FingerSkate easily and used the new technique actively.
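The skating behavior described above can be illustrated with a tiny state machine. The semantics are inferred from the abstract, and the class and method names are hypothetical, not the authors' implementation:

```python
class FingerSkate:
    """Toy state machine for the 'skating' idea (assumed semantics):
    a two-finger gesture, once started, stays active while at least
    one finger remains down, instead of requiring both."""

    def __init__(self):
        self.active = False

    def update(self, fingers_down):
        if fingers_down >= 2:
            self.active = True       # gesture starts with two fingers
        elif fingers_down == 0:
            self.active = False      # lifting all fingers ends it
        return self.active           # a single remaining finger keeps it going
```

Note that a single finger alone never starts the gesture; it can only sustain one that two fingers began.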
DOI: 10.1145/2508468.2514733 · Published 2013-10-08
Citations: 1
Augmenting braille input through multitouch feedback
Hugo Nicolau, Kyle Montague, J. Guerreiro, Diogo Marques, Tiago Guerreiro, Craig D. Stewart, Vicki L. Hanson
Current touch interfaces lack the rich tactile feedback that allows blind users to detect and correct errors. This is especially relevant for multitouch interactions, such as Braille input. We propose HoliBraille, a system that combines touch input and multi-point vibrotactile output on mobile devices. We believe this technology can offer several benefits to blind users; namely, convey feedback for complex multitouch gestures, improve input performance, and support inconspicuous interactions. In this paper, we present the design of our unique prototype, which allows users to receive multitouch localized vibrotactile feedback. Preliminary results on perceptual discrimination show an average of 100% and 82% accuracy for single-point and chord discrimination, respectively. Finally, we discuss a text-entry application with rich tactile feedback.
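For illustration, the chord-to-character mapping that such a Braille input system must perform can be sketched as a lookup over sets of pressed dots. The dictionary below covers only a small subset of the six-dot Braille alphabet, and the function names are hypothetical; the vibrotactile feedback channel central to HoliBraille is out of scope here:

```python
# Subset of the standard 6-dot Braille alphabet (dots numbered 1-6).
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode_chord(dots):
    """Map a multitouch chord (set of pressed dot indices) to a letter,
    or '?' for an unrecognized chord."""
    return BRAILLE.get(frozenset(dots), "?")
```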
DOI: 10.1145/2508468.2514720 · Published 2013-10-08
Citations: 7
Pixel-based reverse engineering of graphical interfaces
M. Dixon
My dissertation proposes a vision in which anybody can modify any interface of any application. Realizing this vision is difficult because of the rigidity and fragmentation of current interfaces. Specifically, rigidity makes it difficult or impossible for a designer to modify or customize existing interfaces. Fragmentation results from the fact that people generally use many different applications built with a variety of toolkits. Each is implemented differently, so it is difficult to consistently add new functionality. As a result, researchers are often limited to demonstrating new ideas in small testbeds, and practitioners often find it difficult to adopt and deploy ideas from the literature. In my dissertation, I propose transcending the rigidity and fragmentation of modern interfaces by building upon their single largest commonality: that they ultimately consist of pixels painted to a display. Building from this universal representation, I propose pixel-based interpretation to enable modification of interfaces without their source code and independent of their underlying toolkit implementation.
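As a toy illustration of the pixel-level idea (not the dissertation's actual system, which is far more sophisticated), a widget can be located in a screenshot by exact template matching over raw pixel values. The 2-D-list representation and function name here are assumptions:

```python
def find_widget(screen, template):
    """Locate a widget in a screenshot by exact pixel matching.

    'screen' and 'template' are 2-D lists of pixel values.
    Returns the (row, col) of the top-left corner of the first
    match, or None if the template appears nowhere.
    """
    th, tw = len(template), len(template[0])
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None
```

A real pixel-based interpreter must also cope with theming, scaling, and anti-aliasing, which exact matching ignores.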
DOI: 10.1145/2508468.2508469 · Published 2013-10-08
Citations: 0
An assembly of soft actuators for an organic user interface
Yoshiharu Ooide, Hiroki Kawaguchi, T. Nojima
An organic user interface (OUI) is a kind of interface that is based on natural human-human and human-physical object interaction models. In such situations, hair and fur play important roles in establishing smooth and natural communication. Animals and birds use their hair, fur and feathers to express their emotions, and groom each other when forming closer relationships. Therefore, hair and fur are potential materials for development of the ideal OUI. In this research, we propose the hairlytop interface, which is a collection of hair-like units composed of shape memory alloys, for use as an OUI. The proposed interface is capable of improving its spatial resolution and can be used to develop a hair surface on any electrical device shape.
DOI: 10.1145/2508468.2514723 · Published 2013-10-08
Citations: 9
Integrated visual representations for programming with real-world input and output
Jun Kato
As computers become more pervasive, more programs deal with real-world input and output (real-world I/O), such as processing camera images and controlling robots. Real-world I/O usually involves complex data that is hard to represent as text or symbols, while most current integrated development environments (IDEs) are equipped with text-based editors and debuggers. My thesis investigates how visual representations of the real world can be integrated within the text-based development environment to enhance the programming experience. In particular, we have designed and implemented IDEs for three scenarios, all of which make use of photos and videos representing the real world. Based on these experiences, we discuss "programming with example data," a technique where the programmer demonstrates examples to the IDE and writes text-based code with the support of those examples.
DOI: 10.1145/2508468.2508476 · Published 2013-10-08
Citations: 5
Multi-touch gesture recognition by single photoreflector
H. Manabe
A simple technique is proposed that uses a single photoreflector to recognize multi-touch gestures. Touch and multi-finger swipe are robustly discriminated and recognized. Further, swipe direction can be detected by adding a gradient to the sensitivity.
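A minimal sketch of the classification idea, assuming a normalized 1-D reflectance trace and simple duration and slope heuristics. The thresholds, signal model, and function name are hypothetical, not the paper's method; the "gradient" in the sensitivity is modeled as a monotonic trend the swipe imposes on the trace:

```python
def classify_gesture(samples, threshold=0.5, touch_max_len=3):
    """Classify a 1-D photoreflector trace into 'touch', a swipe, or 'none'.

    A short above-threshold burst is treated as a touch; a longer one
    as a swipe, whose direction is inferred from the trend that a
    spatial sensitivity gradient imposes (rising = left-to-right here).
    """
    active = [s for s in samples if s > threshold]
    if not active:
        return "none"
    if len(active) <= touch_max_len:
        return "touch"
    return "swipe-right" if active[-1] > active[0] else "swipe-left"
```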
DOI: 10.1145/2508468.2514933 · Published 2013-10-08
Citations: 5
Glassified: an augmented ruler based on a transparent display for real-time interactions with paper
Anirudh Sharma, Lirong Liu, P. Maes
We introduce Glassified, a modified ruler with a transparent display to supplement physical strokes made on paper with virtual graphics. Because the display is transparent, both the physical strokes and the virtual graphics are visible in the same plane. A digitizer captures the pen strokes in order to update the graphical overlay, fusing the traditional function of a ruler with the added advantages of a digital, display-based system. We describe use-cases of Glassified in the areas of math and physics and discuss its advantages over traditional systems.
DOI: 10.1145/2508468.2514937 · Published 2013-10-08
Citations: 10
PhysInk: sketching physical behavior
J. Scott, Randall Davis
Describing device behavior is a common task that is currently not well supported by general animation or CAD software. We present PhysInk, a system that enables users to demonstrate 2D behavior by sketching and directly manipulating objects on a physics-enabled stage. Unlike previous tools that simply capture the user's animation, PhysInk captures an understanding of the behavior in a timeline. This enables useful capabilities such as causality-aware editing and finding physically-correct equivalent behavior. We envision PhysInk being used as a physics teacher's sketchpad or a WYSIWYG tool for game designers.
DOI: 10.1145/2508468.2514930 · Published 2013-10-08
Citations: 16
Multi-perspective multi-layer interaction on mobile device
M. Khademi, Mingming Fan, Hossein Mousavi Hondori, C. Lopes
We propose a novel multi-perspective, multi-layer interaction technique for mobile devices that provides an immersive experience of navigating an object in 3D. The mobile device serves as a window through which the user can examine the object in detail from various perspectives by orienting the device differently. Different layers of the object can also be revealed as users move the device toward or away from themselves. Our approach runs in real time, is completely mobile (running on Android), and does not depend on external sensors or displays (e.g., a camera or projector).
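The window metaphor can be illustrated with a simple parallax mapping from device tilt to a per-layer screen offset, where farther layers shift more when the device is tilted. The formula, parameter names, and constant `k` are assumptions for illustration, not the paper's actual rendering model:

```python
import math

def view_offset(pitch_deg, roll_deg, depth, k=0.5):
    """Map device tilt to a 2-D parallax offset for one content layer.

    pitch_deg / roll_deg: device tilt angles in degrees.
    depth: layer depth behind the 'window' (arbitrary units);
           deeper layers shift more, giving the look-through effect.
    k: gain constant controlling the strength of the effect.
    """
    dx = k * depth * math.tan(math.radians(roll_deg))
    dy = k * depth * math.tan(math.radians(pitch_deg))
    return dx, dy
```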
DOI: 10.1145/2508468.2514712 · Published 2013-10-08
Citations: 0