
Proceedings of the 2nd ACM symposium on Spatial user interaction — Latest Publications

T(ether): spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659785
David Lakatos, M. Blackshaw, A. Olwal, Zachary Barryte, K. Perlin, H. Ishii
T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users' heads, hands, fingers and pinching, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand's position above, behind or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model in addition to controlling environment properties. We report on initial user observations from an experiment for 3D modeling, which indicate T(ether)'s potential for embodied viewport control and 3D modeling interactions.
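The abstract describes adapting the UI based on whether the hand is above, behind, or on the display surface. As a rough illustration of that idea (not the authors' code — thresholds, mode names, and the function are all hypothetical), the mode switch can be sketched as a mapping from the hand's signed distance to the display plane:

```python
# Hypothetical sketch of T(ether)-style UI adaptation: the interaction mode
# switches based on the tracked hand's signed distance from the display plane.
# Thresholds and mode names are illustrative, not taken from the paper.

def ui_mode(hand_offset_cm: float, touch_threshold_cm: float = 1.0) -> str:
    """Map the hand's signed distance from the display surface to a UI mode.

    Positive offsets are in front of (above) the screen, negative behind it.
    """
    if abs(hand_offset_cm) <= touch_threshold_cm:
        return "on-surface"   # e.g. touch-screen editing
    if hand_offset_cm > 0:
        return "above"        # e.g. mid-air pinch to grab geometry
    return "behind"           # e.g. reach behind the window to move objects

print(ui_mode(0.4))   # on-surface
print(ui_mode(12.0))  # above
print(ui_mode(-8.0))  # behind
```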
Citations: 40
A self-experimentation report about long-term use of fully-immersive technology
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659767
Frank Steinicke, G. Bruder
Virtual and digital worlds have become an essential part of our daily life, and many activities that we used to perform in the real world, such as communication, e-commerce, or games, have nowadays been transferred to the virtual world. This transition has been addressed many times in science fiction literature and cinematographic works, which often show dystopic visions in which humans live their lives in a virtual reality (VR)-based setup while they are immersed in a virtual or remote location by means of avatars or surrogates. In order to gain a better understanding of how living in such a virtual environment (VE) would impact human beings, we conducted a self-experiment in which we exposed a single participant to an immersive VR setup for 24 hours (divided into repeated sessions of two hours of VR exposure followed by ten-minute breaks), which is to our knowledge the longest documented use of an immersive VE so far. We measured different metrics to analyze how human perception, behavior, cognition, and the motor system change over time in a fully isolated virtual world.
Citations: 53
Object-based touch manipulation for remote guidance of physical tasks
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659768
Matt Adcock, Dulitha Ranatunga, Ross T. Smith, B. Thomas
This paper presents a spatial multi-touch system for the remote guidance of physical tasks that uses semantic information about the physical properties of the environment. It enables a remote expert to observe a video feed of the local worker's environment and directly specify object movements via a touch display. Visual feedback for the gestures is displayed directly in the local worker's physical environment with Spatial Augmented Reality and observed by the remote expert through the video feed. A virtual representation of the physical environment is captured with a Kinect that facilitates the context-based interactions. We evaluate two methods of remote worker interaction, object-based and sketch-based, and also investigate the impact of two camera positions, top and side, on task performance. Our results indicate that translation and aggregate tasks could be performed more accurately via the object-based technique when the top-down camera feed was used. With the side-on camera view, however, sketching was faster and rotations were more accurate. We also found that for object-based interactions the top view was better on all four of our measured criteria, while for sketching no significant difference was found between camera views.
Citations: 21
Simulator for developing gaze sensitive environment using corneal reflection-based remote gaze tracker
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2661207
Takashi Nagamatsu, Michiya Yamamoto, G. Rigoll
We describe a simulator for developing a gaze sensitive environment using a corneal reflection-based remote gaze tracker. The simulator can arrange cameras and IR-LEDs in 3D to check, prior to implementation, that the measuring range suits the target volume. We applied it to a museum showcase and a car.
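The core check such a simulator performs — whether a point in the target volume lies within a camera's measuring range — can be sketched as a viewing-cone test. This is an illustrative stand-in, not the authors' implementation; the function, cone model, and parameters are assumptions:

```python
import math

# Illustrative sketch (not the authors' code): test whether a target point
# falls inside a camera's viewing cone, as a simulator like this might do
# when checking the measuring range against the target volume.

def in_camera_cone(cam_pos, cam_dir, target, max_dist, half_fov_deg):
    """Return True if `target` is within `max_dist` of the camera and
    inside the cone of half-angle `half_fov_deg` around `cam_dir`."""
    v = [t - c for t, c in zip(target, cam_pos)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist == 0.0:
        return True   # target coincides with the camera position
    if dist > max_dist:
        return False
    norm = math.sqrt(sum(d * d for d in cam_dir))
    cos_angle = sum(a * b for a, b in zip(v, cam_dir)) / (dist * norm)
    return cos_angle >= math.cos(math.radians(half_fov_deg))

# A point 1 m straight ahead is covered; one far off-axis is not.
print(in_camera_cone((0, 0, 0), (0, 0, 1), (0, 0, 1.0), 2.0, 30))    # True
print(in_camera_cone((0, 0, 0), (0, 0, 1), (1.5, 0, 0.1), 2.0, 30))  # False
```

Sampling the target volume with this test for every camera/IR-LED placement gives a coverage map before any hardware is installed.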
Citations: 2
Emotional space: understanding affective spatial dimensions of constructed embodied shapes
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2661208
Edward F. Melcer, K. Isbister
We build upon recent research designing a constructive, multi-touch emotional assessment tool and present preliminary qualitative results from a Wizard of Oz study simulating the tool with clay. Our results showed the importance of emotionally contextualized spatial orientations, manipulations, and interactions of real world objects in the constructive process, and led to the identification of two new affective dimensions for the tool.
Citations: 1
Session details: Seeing, walking and being in spatial VEs
Pub Date : 2014-10-04 DOI: 10.1145/3247433
Steven K. Feiner
Citations: 0
GestureAnalyzer: visual analytics for pattern analysis of mid-air hand gestures
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659772
Sujin Jang, N. Elmqvist, K. Ramani
Understanding the intent behind human gestures is a critical problem in the design of gestural interactions. A common method to observe and understand how users express gestures is to use elicitation studies. However, these studies require time-consuming analysis of user data to identify gesture patterns. Also, the analysis by humans cannot describe gestures in as detail as in data-based representations of motion features. In this paper, we present GestureAnalyzer, a system that supports exploratory analysis of gesture patterns by applying interactive clustering and visualization techniques to motion tracking data. GestureAnalyzer enables rapid categorization of similar gestures, and visual investigation of various geometric and kinematic properties of user gestures. We describe the system components, and then demonstrate its utility through a case study on mid-air hand gestures obtained from elicitation studies.
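The clustering step at the heart of such a tool — grouping similar gestures from their motion features — can be sketched with a greedy single-linkage grouping over feature vectors. The features, threshold, and algorithm here are illustrative assumptions, not GestureAnalyzer's actual method:

```python
import math

# Toy sketch of gesture categorization: group gesture feature vectors
# (here, made-up 2D descriptors) whenever one lies within `threshold` of
# any member of an existing cluster (greedy single linkage). Illustrative
# only -- not the clustering used by GestureAnalyzer itself.

def cluster_gestures(features, threshold):
    clusters = []
    for f in features:
        for c in clusters:
            if any(math.dist(f, g) <= threshold for g in c):
                c.append(f)   # joins the first sufficiently close cluster
                break
        else:
            clusters.append([f])  # no close cluster: start a new one
    return clusters

gestures = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0), (9.0, 0.0)]
print(len(cluster_gestures(gestures, 1.0)))  # 3 clusters
```

An interactive tool would let the analyst adjust `threshold` and inspect each cluster's members, which is the exploratory loop the abstract describes.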
Citations: 32
Visual aids in 3D point selection experiments
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659770
Robert J. Teather, W. Stuerzlinger
We present a study investigating the influence of visual aids on 3D point selection tasks. In a Fitts' law pointing experiment, we compared the effects of texturing, highlighting targets upon being touched, and the presence of support cylinders intended to eliminate floating targets. Results of the study indicate that texturing and support cylinders did not significantly influence performance. Enabling target highlighting increased movement speed, while decreasing error rate. Pointing throughput was unaffected by this speed-accuracy tradeoff. Highlighting also eliminated significant differences between selection coordinate depth deviation and the deviation in the two orthogonal axes.
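The pointing throughput the abstract refers to is conventionally computed as effective throughput in ISO 9241-9-style Fitts' law studies. The sketch below shows that common formulation for illustration; it is not necessarily the exact computation used in this paper:

```python
import math

# Effective throughput as commonly computed in Fitts' law pointing studies
# (ISO 9241-9 style). Shown for illustration; the paper's exact computation
# may differ.

def effective_throughput(amplitude, sd_endpoints, movement_time_s):
    """TP = IDe / MT, with IDe = log2(Ae / We + 1) and We = 4.133 * SDx."""
    we = 4.133 * sd_endpoints                 # effective target width
    ide = math.log2(amplitude / we + 1)       # effective index of difficulty (bits)
    return ide / movement_time_s              # bits per second

# e.g. 30 cm movements, 1 cm endpoint spread, 1.2 s per selection:
tp = effective_throughput(30.0, 1.0, 1.2)
print(round(tp, 2))  # 2.54 bits/s
```

Because the effective width grows with endpoint spread, throughput folds speed and accuracy into one measure — which is why it can stay flat under the speed-accuracy tradeoff the abstract reports.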
Citations: 53
HOBS: head orientation-based selection in physical spaces
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2659773
Ben Zhang, Yu-Hsiang Chen, Claire Tuna, Achal Dave, Yang Li, Edward A. Lee, Björn Hartmann
Emerging head-worn computing devices can enable interactions with smart objects in physical spaces. We present the iterative design and evaluation of HOBS -- a Head-Orientation Based Selection technique for interacting with these devices at a distance. We augment a commercial wearable device, Google Glass, with an infrared (IR) emitter to select targets equipped with IR receivers. Our first design shows that a naive IR implementation can outperform list selection, but has poor performance when refinement between multiple targets is needed. A second design uses IR intensity measurement at targets to improve refinement. To address the lack of natural mapping of on-screen target lists to spatial target location, our third design infers a spatial data structure of the targets enabling a natural head-motion based disambiguation. Finally, we demonstrate a universal remote control application using HOBS and report qualitative user impressions.
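The second design's intensity-based refinement can be illustrated as follows: when several IR receivers report a hit from the head-worn emitter, pick the target whose receiver measured the strongest signal. This is a hedged sketch of the idea, with made-up names and values, not the HOBS implementation:

```python
# Illustrative sketch of HOBS's intensity-based refinement: among targets
# whose IR receivers registered the head-worn emitter's beam, select the
# one with the strongest measured intensity. Names and values are made up.

def refine_by_intensity(readings):
    """readings: dict mapping target name -> measured IR intensity (0 = no hit)."""
    hits = {t: i for t, i in readings.items() if i > 0}
    if not hits:
        return None           # nothing in the beam
    return max(hits, key=hits.get)

readings = {"lamp": 0.0, "speaker": 0.42, "thermostat": 0.87}
print(refine_by_intensity(readings))  # thermostat
```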
Citations: 31
Proposing a classification model for perceptual target selection on large displays
Pub Date : 2014-10-04 DOI: 10.1145/2659766.2661216
Seungjae Oh, Heejin Kim, H. So
In this research, we propose a linear SVM classification model for perceptual distal target selection on large displays. The model is based on two simple features of users' finger movements that reflect users' visual perception of targets. The model achieves an accuracy of 92.78% in predicting the intended target at the end point.
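The paper's model is a linear SVM over two finger-movement features. As a dependency-free stand-in, the sketch below learns a linear decision boundary with the perceptron rule on made-up two-feature samples; the features, labels, and training rule are all illustrative assumptions, not the paper's method or data:

```python
# Toy stand-in for the paper's linear SVM: a perceptron learns a linear
# decision boundary over two made-up finger-movement features. Features,
# labels, and hyperparameters are illustrative, not from the paper.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):            # y in {-1, +1}
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:
                w[0] += lr * y * x[0]                # misclassified: nudge the
                w[1] += lr * y * x[1]                # boundary toward the sample
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

X = [(0.2, 0.1), (0.3, 0.2), (0.8, 0.9), (0.9, 0.7)]  # made-up feature pairs
y = [-1, -1, 1, 1]                                    # not-intended / intended target
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # [-1, -1, 1, 1]
```

An SVM would additionally maximize the margin of this boundary; for linearly separable toy data the perceptron suffices to show the two-feature decision rule.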
Citations: 0