
Proceedings of the 2016 Symposium on Spatial User Interaction: Latest Publications

Session details: Panel
Pub Date: 2016-10-15 DOI: 10.1145/3248577
Aitor Rovira
{"title":"Session details: Panel","authors":"Aitor Rovira","doi":"10.1145/3248577","DOIUrl":"https://doi.org/10.1145/3248577","url":null,"abstract":"","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130000217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
KnowHow: Contextual Audio-Assistance for the Visually Impaired in Performing Everyday Tasks
Pub Date: 2016-10-15 DOI: 10.1145/2983310.2989196
A. Agarwal, Sujeath Pareddy, Swaminathan Manohar
We present a device for visually impaired persons (VIPs) that delivers contextual audio assistance for physical objects and tasks. In initial observations, we found ubiquitous use of audio-assistance technologies by VIPs for interacting with computing devices, such as Android TalkBack. However, we also saw that devices without screens frequently lack accessibility features. Our solution allows a VIP to obtain audio assistance in the presence of an arbitrary physical interface or object through a chest-mounted device. On-board are camera sensors that point towards the user's personal front-facing grasping region. Upon detecting certain gestures such as picking up an object, the device provides helpful contextual audio information to the user. Textual interfaces can be read aloud by sliding a finger over the surface of the object, allowing the user to hear a document or receive audio guidance for non-assistively-enabled electronic devices. The user may provide questions verbally in order to refine their audio assistance, or to ask broad questions about their environment. Our motivation is to provide sensemaking faculties that creatively approximate those of non-VIPs in tasks that make VIPs ineligible for common employment opportunities.
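The read-aloud behavior described here maps naturally onto an OCR-plus-speech pipeline. A minimal sketch follows, assuming a fingertip position from an upstream hand tracker (hypothetical; the device's actual gesture detection is not described in the abstract):

```python
import cv2                # camera capture
import pytesseract        # OCR for the text patch under the finger
import pyttsx3            # offline text-to-speech

def read_region_aloud(frame, finger_xy, box=120):
    """OCR the image patch around the fingertip and speak it aloud.
    `finger_xy` is assumed to come from a hand tracker (hypothetical);
    `box` is half the patch size in pixels."""
    x, y = finger_xy
    patch = frame[max(0, y - box):y + box, max(0, x - box):x + box]
    text = pytesseract.image_to_string(patch).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

cap = cv2.VideoCapture(0)                           # chest-mounted camera
ok, frame = cap.read()
if ok:
    read_region_aloud(frame, finger_xy=(320, 240))  # placeholder position
cap.release()
```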
Citations: 1
Real-time Sign Language Recognition with Guided Deep Convolutional Neural Networks
Pub Date: 2016-10-15 DOI: 10.1145/2983310.2989187
Zhengzhe Liu, Fuyang Huang, G. Tang, F. Sze, J. Qin, Xiaogang Wang, Qiang Xu
We develop a real-time, robust, and accurate sign language recognition system leveraging deep convolutional neural networks (DCNNs). Our framework prevents common problems such as the error accumulation seen in existing frameworks, and it outperforms state-of-the-art systems in accuracy, recognition time, and usability.
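The abstract does not detail the guided DCNN architecture, so, as a point of reference only, a minimal per-frame sign classifier in PyTorch could look like this sketch (layer sizes and the 64x64 hand-crop input are assumptions):

```python
import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    """Minimal CNN over 64x64 RGB hand crops; one output per sign."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32x32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 16x16
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = SignClassifier(num_classes=26)(torch.randn(1, 3, 64, 64))
```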
Citations: 10
Desktop Orbital Camera Motions Using Rotational Head Movements
Pub Date: 2016-10-15 DOI: 10.1145/2983310.2985758
Thibaut Jacob, G. Bailly, É. Lecolinet, Géry Casiez, M. Teyssier
In this paper, we investigate how head movements can serve to change the viewpoint in 3D applications, especially when the viewpoint needs to be changed quickly and temporarily to disambiguate the view. We study how to use yaw and roll head movements to perform orbital camera control, i.e., to rotate the camera around a specific point in the scene. We report on four user studies. Study 1 evaluates the useful resolution of head movements. Study 2 examines visual and physical comfort. Study 3 compares two interaction techniques designed by taking into account the results of the two previous studies. Results show that head roll is more efficient than head yaw for orbital camera control when interacting with a screen. Finally, Study 4 compares head roll with a standard technique relying on the mouse and the keyboard; in a second stage, users were allowed to use both techniques at their convenience. Results show that users prefer the head control technique and are 14.5% faster with it.
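Orbital control of this kind reduces to placing the camera on a sphere around the pivot point. In the sketch below, head yaw drives azimuth and head roll drives elevation; the mapping and gain are assumptions, as the paper's transfer functions are not reproduced in the abstract:

```python
import numpy as np

def orbit_camera(pivot, radius, head_yaw, head_roll, gain=2.0):
    """Return an eye position on a sphere of `radius` around `pivot`.
    Head yaw maps to azimuth and head roll to elevation, amplified by
    `gain` (assumed); angles in radians. The camera looks at `pivot`."""
    az, el = gain * head_yaw, gain * head_roll
    return pivot + radius * np.array([
        np.cos(el) * np.sin(az),
        np.sin(el),
        np.cos(el) * np.cos(az),
    ])

eye = orbit_camera(np.zeros(3), radius=5.0, head_yaw=0.2, head_roll=-0.1)
```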
Citations: 9
Inducing Body-Transfer Illusions in VR by Providing Brief Phases of Visual-Tactile Stimulation
Pub Date: 2016-10-15 DOI: 10.1145/2983310.2985760
Oscar Ariza, J. Freiwald, Nadine Laage, M. Feist, Mariam Salloum, G. Bruder, Frank Steinicke
Current developments in the area of virtual reality (VR) allow numerous users to experience immersive virtual environments (VEs) in a broad range of application fields. Likewise, research has shown novel advances in wearable devices that provide vibrotactile feedback, which can be combined with low-cost technology for hand tracking and gesture recognition. The combination of these technologies can be used to investigate interesting psychological illusions. For instance, body-transfer illusions, such as the rubber-hand illusion or the elongated-arm illusion, have shown that it is possible to give a person a persistent illusion of body transfer after only brief phases of synchronized visual-haptic stimulation. The motivation of this paper is to induce such perceptual illusions by combining VR, vibrotactile, and tracking technologies, offering an interesting way to create new spatial interaction experiences centered on the senses of sight and touch. We present a technology framework that includes a pair of self-made gloves featuring vibrotactile feedback that can be synchronized with audio-visual stimulation in order to reproduce body-transfer illusions in VR. We describe the implementation of the framework in detail and show that the proposed setup is able to induce the elongated-arm illusion with automatic tactile stimuli, instead of the traditional approach based on manually synchronized stimulation.
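The illusion hinges on tight synchrony between the visual contact event and the tactile pulse. A schematic handler is sketched below; `glove.vibrate` stands in for whatever driver interface the self-made gloves expose (hypothetical), and the pulse length is an assumed value:

```python
import time

PULSE_S = 0.08  # assumed pulse length; synchrony, not duration, drives the illusion

def on_virtual_touch(glove, finger_id):
    """Fire a short vibrotactile pulse the instant the rendered hand
    touches a virtual object, so visual and tactile events coincide.
    In a real render loop the pulse would be scheduled, not slept."""
    glove.vibrate(finger_id, amplitude=1.0)  # hypothetical driver call
    time.sleep(PULSE_S)
    glove.vibrate(finger_id, amplitude=0.0)
```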
Citations: 8
Touching the Sphere: Leveraging Joint-Centered Kinespheres for Spatial User Interaction
Pub Date: 2016-10-15 DOI: 10.1145/2983310.2985753
Paul Lubos, G. Bruder, Oscar Ariza, Frank Steinicke
Designing spatial user interfaces for virtual reality (VR) applications that are intuitive, comfortable, and easy to use while at the same time providing high task performance is a challenging task. The challenge is even harder to solve because perception and action in immersive virtual environments differ significantly from the real world, causing natural user interfaces to elicit a dissociation of perceptual and motor space as well as levels of discomfort and fatigue unknown in the real world. In this paper, we present and evaluate a novel method that leverages joint-centered kinespheres for interactive spatial applications. We introduce kinespheres within arm's reach that envelop the reachable space for each joint, such as the shoulder, elbow, or wrist, thus defining 3D interactive volumes whose boundaries are given by 2D manifolds. We present a Fitts' law experiment in which we evaluated spatial touch performance inside and on the boundary of the main joint-centered kinespheres. Moreover, we present a confirmatory experiment in which we compared joint-centered interaction with traditional spatial head-centered menus. Finally, we discuss the advantages and limitations of placing interactive graphical elements relative to joint positions and, in particular, on the boundaries of kinespheres.
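For reference, the Fitts' law evaluation mentioned above rests on the standard Shannon formulation, in which movement time grows with the index of difficulty log2(D/W + 1):

```python
import math

def fitts_mt(a: float, b: float, distance: float, width: float) -> float:
    """Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
    `a` and `b` are regression constants fitted per device and task."""
    return a + b * math.log2(distance / width + 1)

# e.g. a 4 cm target at 40 cm, with illustrative (not fitted) constants:
mt = fitts_mt(a=0.2, b=0.3, distance=40, width=4)  # about 1.24 s
```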
Citations: 36
Haptic Exploration of Remote Environments with Gesture-based Collaborative Guidance
Pub Date: 2016-10-15 DOI: 10.1145/2983310.2989201
Seokyeol Kim, Jinah Park
We present a collaborative haptic interaction method for exploring a remote physical environment with guidance from a distant helper. Spatial information about the remote environment, represented as a point cloud, is directly rendered as a contact force without reconstructing surfaces. On top of this, the helper can use free-hand gestures to selectively exert an attractive force that draws the user toward a target, or a repulsive force that keeps the user away from a forbidden region.
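Rendering contact directly from points can be sketched as a penalty force summed over cloud points that penetrate the haptic probe's sphere; this is a generic proxy-free formulation under assumed stiffness-based contact, not necessarily the authors' exact model:

```python
import numpy as np

def contact_force(probe_pos, points, probe_radius, stiffness):
    """Sum penalty forces from all cloud points inside the probe sphere,
    pushing the probe away from each penetrating point; no mesh needed."""
    d = points - probe_pos                        # (N, 3) probe-to-point offsets
    dist = np.maximum(np.linalg.norm(d, axis=1), 1e-9)
    inside = dist < probe_radius
    if not inside.any():
        return np.zeros(3)
    penetration = probe_radius - dist[inside]
    directions = -d[inside] / dist[inside, None]  # away from each point
    return stiffness * (directions * penetration[:, None]).sum(axis=0)

force = contact_force(np.zeros(3), np.random.rand(1000, 3) - 0.5, 0.1, 500.0)
```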
Citations: 2
TickTockRay Demo: Smartwatch Raycasting for Mobile HMDs
Pub Date: 2016-10-15 DOI: 10.1145/2983310.2989206
D. Kharlamov, Krzysztof Pietroszek, Liudmila Tahai
We demonstrate TickTockRay, an implementation of a fixed-origin raycasting technique that utilizes a smartwatch as an input device. We show that smartwatch-based raycasting is a good alternative to a head-rotation-controlled cursor or a specialized input device. TickTockRay implements fixed-origin raycasting with the ray originating from a fixed point located roughly at the user's chest. The control-display (C/D) ratio of the TickTockRay technique is set to 1, with exact correspondence between the ray and the smartwatch's rotation. This C/D ratio enables a user to select targets in the entire virtual reality control space.
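With a C/D ratio of 1, the ray simply inherits the watch's orientation from the fixed origin. A sketch, assuming a unit quaternion from the watch IMU and an arbitrary chest-height origin:

```python
import numpy as np

def cast_ray(watch_quat, origin=np.array([0.0, 1.3, 0.1])):
    """Fixed-origin raycasting: the ray starts at a fixed chest point and
    points along the watch's forward axis; with C/D = 1 the watch
    rotation maps 1:1 onto the ray. `watch_quat` is a unit quaternion
    (w, x, y, z); the origin coordinates are assumed values."""
    w, x, y, z = watch_quat
    # the device's local forward axis (0, 0, -1) rotated by the quaternion
    fwd = np.array([
        -2.0 * (x * z + w * y),
        -2.0 * (y * z - w * x),
        -(1.0 - 2.0 * (x * x + y * y)),
    ])
    return origin, fwd / np.linalg.norm(fwd)

origin, direction = cast_ray((1.0, 0.0, 0.0, 0.0))  # identity -> straight ahead
```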
Citations: 5
Moving Ahead with Peephole Pointing: Modelling Object Selection with Head-Worn Display Field of View Limitations
Pub Date: 2016-10-15 DOI: 10.1145/2983310.2985756
Barrett Ens, David Ahlström, Pourang Irani
Head-worn displays (HWDs) are now becoming widely available, which will allow researchers to explore sophisticated interface designs that support rich user productivity features. In a large virtual workspace, the limited available field of view (FoV) may cause objects to be located outside of the available viewing area, requiring users to first locate an item using head motion before making a selection. However, FoV varies widely across devices, with an unknown impact on interface usability. We present a user study to test two-step selection models previously proposed for "peephole pointing" in large virtual workspaces on mobile devices. Using a CAVE environment to simulate the FoV restriction of stereoscopic HWDs, we compare two input methods, direct pointing and raycasting, in a selection task with varying FoV width. We find a very strong model fit in this context, comparable to the prediction accuracy in the original studies and much more accurate than the traditional Fitts' law model. We detect an advantage of direct pointing over raycasting, particularly with small targets. Moreover, we find that this advantage of direct pointing diminishes with decreasing FoV.
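The abstract does not restate the model being tested; the two-step peephole pointing model from the mobile-device literature (after Rohs and Oulasvirta) splits movement time into a peephole-navigation term and an in-view pointing term, roughly as sketched below (the exact formulation fitted by the authors may differ):

```python
import math

def peephole_mt(a, b, c, D, S, W):
    """Two-part peephole pointing model:
    MT = a + b * log2(D/S + 1) + c * log2(S/W + 1),
    with target distance D, peephole (here: FoV) size S, and target
    width W; a, b, c are regression constants fitted from data."""
    return a + b * math.log2(D / S + 1) + c * math.log2(S / W + 1)
```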
Citations: 15
Arm-Hidden Private Area on an Interactive Tabletop System
Pub Date: 2016-10-15 DOI: 10.1145/2983310.2989194
Kai Li, Asako Kimura, F. Shibata
Tabletop systems are used primarily in meetings or other activities wherein information is shared. However, when confidential input is needed, for example when entering a password, privacy becomes an issue. In this study, we use the shadowed area near the forearm that appears when the user places their forearm on the tabletop, and our tabletop security system displays a confidential information window inside that hidden area. We also introduce several potential applications for this hidden-area system.
Citations: 0