
Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology: Latest Publications

Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality
Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, J. Tompkin, J. Hughes, Jeff Huang
Smartphone augmented reality (AR) lets users interact with physical and virtual spaces simultaneously. With 3D hand tracking, smartphones become an apparatus for grabbing and moving virtual objects directly. Based on design considerations for interaction, mobility, and object appearance and physics, we implemented a prototype for portable 3D hand tracking using a smartphone, a Leap Motion controller, and a computation unit. Following an experience prototyping procedure, 12 researchers used the prototype to help explore usability issues and define the design space. We identified issues in perception (moving to the object, reaching for the object), manipulation (successfully grabbing and orienting the object), and behavioral understanding (knowing how to use the smartphone as a viewport). To overcome these issues, we designed object-based feedback and accommodation mechanisms and studied their perceptual and behavioral effects via two tasks: picking up distant objects, and assembling a virtual house from blocks. Our mechanisms enabled significantly faster and more successful user interaction than the initial prototype in picking up and manipulating stationary and moving objects, with a lower cognitive load and greater user preference. The resulting system, Portal-ble, improves user intuition and aids free-hand interactions in mobile situations.
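To make the grab interaction concrete, here is a minimal Python sketch of the kind of per-frame test a hand-tracking AR prototype might run; it is not the Portal-ble implementation, and the function names, the 3 cm pinch threshold, and the spherical object bound are assumptions introduced for illustration.

```python
# A minimal, hedged sketch of a grab test over 3D hand-tracking data;
# not the Portal-ble implementation. The pinch threshold and the
# spherical object bound are illustrative assumptions.
import math

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples, in meters."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_grabbing(thumb_tip, index_tip, obj_center, obj_radius, pinch_threshold=0.03):
    """True when the thumb-index midpoint lies inside the object's bounding sphere
    and the fingertips are pinched closer together than the threshold."""
    pinch_gap = distance(thumb_tip, index_tip)
    midpoint = tuple((t + i) / 2 for t, i in zip(thumb_tip, index_tip))
    return distance(midpoint, obj_center) <= obj_radius and pinch_gap <= pinch_threshold

# Fingertips closing around a 5 cm virtual block 40 cm in front of the camera.
print(is_grabbing((0.01, 0.0, 0.4), (-0.01, 0.0, 0.4), (0.0, 0.0, 0.4), 0.05))  # True
```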
Citations: 34
milliMorph -- Fluid-Driven Thin Film Shape-Change Materials for Interaction Design
Qiuyu Lu, Jifei Ou, João Wilbert, André Haben, Haipeng Mi, H. Ishii
This paper presents a design space, a fabrication system, and applications for creating fluidic chambers and channels at millimeter scale for tangible actuated interfaces. The ability to design and fabricate millifluidic chambers allows one to create high-frequency actuation, sequential control of flows, and high-resolution design on thin-film materials. We propose a four-dimensional design space for creating these fluidic chambers, a novel heat-sealing system that enables easy and precise millifluidics fabrication, and application demonstrations of the fabricated materials for haptics, ambient devices, and robotics. As shape-change materials are increasingly integrated into the design of novel interfaces, milliMorph enriches the library of fluid-driven shape-change materials and demonstrates new design opportunities that are unique to the millimeter scale for product and interaction design.
Citations: 39
Eye&Head: Synergetic Eye and Head Movement for Gaze Pointing and Selection
Ludwig Sidenmark, Hans-Werner Gellersen
Eye gaze involves the coordination of eye and head movement to acquire gaze targets, but existing approaches to gaze pointing are based on eye-tracking in abstraction from head motion. We propose to leverage the synergetic movement of eye and head, and identify design principles for Eye&Head gaze interaction. We introduce three novel techniques that build on the distinction of head-supported versus eyes-only gaze, to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-selections, and iterative and fast confirmation of targets. We demonstrate Eye&Head interaction on applications in virtual reality, and evaluate our techniques against baselines in pointing and confirmation studies. Our results show that Eye&Head techniques enable novel gaze behaviours that provide users with more control and flexibility in fast gaze pointing and selection.
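The distinction between head-supported and eyes-only gaze can be pictured with a small, hedged sketch: label a gaze shift "head-supported" when the head rotates noticeably during the shift, and "eyes-only" otherwise. This is not the paper's technique; the 10 deg/s threshold, the sampling setup, and the function name are assumptions for illustration.

```python
# Hedged sketch: classify a gaze shift by peak head angular speed.
# Not the paper's algorithm; threshold and names are illustrative.
def classify_gaze_shift(head_yaw_deg, sample_rate_hz, threshold_deg_per_s=10.0):
    """head_yaw_deg: head yaw samples (degrees) recorded over one gaze shift."""
    if len(head_yaw_deg) < 2:
        return "eyes-only"
    dt = 1.0 / sample_rate_hz
    peak_speed = max(abs(b - a) / dt for a, b in zip(head_yaw_deg, head_yaw_deg[1:]))
    return "head-supported" if peak_speed > threshold_deg_per_s else "eyes-only"

# A shift with clear head rotation versus one with an almost static head.
print(classify_gaze_shift([0.0, 0.3, 0.9, 1.8], sample_rate_hz=60))     # head-supported
print(classify_gaze_shift([0.0, 0.02, 0.03, 0.03], sample_rate_hz=60))  # eyes-only
```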
Citations: 72
Gaze-Assisted Typing for Smart Glasses
Sunggeun Ahn, Geehyuk Lee
Text entry is expected to be a common task for smart glass users; it is generally performed using a touchpad on the temple or via a promising eye-tracking approach. However, each approach has its own limitations. For more efficient text entry, we present the concept of gaze-assisted typing (GAT), which uses both a touchpad and eye tracking. We initially examined GAT with a minimal eye input load, and demonstrated that the GAT technique was 51% faster than a two-step touch input typing method (i.e., M-SwipeBoard: 5.85 words per minute (wpm) vs. GAT: 8.87 wpm). We also compared GAT methods with varying numbers of touch gestures. The results showed that a GAT requiring five different touch gestures was the most preferred, although all GAT techniques were equally efficient. Finally, we compared GAT with touch-only typing (SwipeZone) and eye-only typing (adjustable dwell) using an eye-trackable head-worn display. The results demonstrate that the most preferred technique, GAT, was 25.4% faster than the eye-only typing and 29.4% faster than the touch-only typing (GAT: 11.04 wpm, eye-only typing: 8.81 wpm, and touch-only typing: 8.53 wpm).
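The speed-up percentages quoted above follow from the reported words-per-minute figures; the short check below recomputes them. Small deviations (e.g. 51.6% versus the reported 51%, 25.3% versus 25.4%) presumably stem from rounding in the published wpm values.

```python
# Relative speed-ups recomputed from the wpm figures quoted in the abstract.
def speedup_percent(faster_wpm, slower_wpm):
    """How much faster the first technique is than the second, in percent."""
    return (faster_wpm / slower_wpm - 1.0) * 100.0

print(round(speedup_percent(8.87, 5.85), 1))   # 51.6  GAT vs. M-SwipeBoard (reported: 51%)
print(round(speedup_percent(11.04, 8.81), 1))  # 25.3  GAT vs. eye-only typing (reported: 25.4%)
print(round(speedup_percent(11.04, 8.53), 1))  # 29.4  GAT vs. touch-only typing (reported: 29.4%)
```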
Citations: 22
Optimizing Portrait Lighting at Capture-Time Using a 360 Camera as a Light Probe
Jane L. E., Ohad Fried, Maneesh Agrawala
We present a capture-time tool designed to help casual photographers orient their subject to achieve a user-specified target facial appearance. The inputs to our tool are an HDR environment map of the scene captured using a 360 camera, and a target facial appearance, selected from a gallery of common studio lighting styles. Our tool computes the optimal orientation for the subject to achieve the target lighting using a computationally efficient precomputed radiance transfer-based approach. It then tells the photographer how far to rotate about the subject. Optionally, our tool can suggest how to orient a secondary external light source (e.g. a phone screen) about the subject's face to further improve the match to the target lighting. We demonstrate the effectiveness of our approach in a variety of indoor and outdoor scenes using many different subjects to achieve a variety of looks. A user evaluation suggests that our tool reduces the mental effort required by photographers to produce well-lit portraits.
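As a rough illustration of the capture-time optimization, the sketch below brute-forces the subject's facing direction so that the brightest region of a highly simplified environment lands at a desired key-light angle relative to the face. It is not the paper's precomputed radiance transfer approach; the point-light environment, angles, and names are invented for the example.

```python
# Hedged sketch: one-dimensional brute-force search over the subject's facing
# direction, with the environment reduced to a few point lights. Not the
# paper's precomputed radiance transfer method.
def best_facing(env_lights, desired_key_offset_deg, step_deg=5):
    """env_lights: [(azimuth_deg, intensity), ...] from a 360-degree light probe.
    desired_key_offset_deg: desired signed azimuth of the key light relative to
    the subject's facing direction. Returns the best facing azimuth in degrees."""
    key_azimuth, _ = max(env_lights, key=lambda light: light[1])  # brightest direction
    best, best_err = None, float("inf")
    for facing in range(0, 360, step_deg):
        offset = (key_azimuth - facing + 180) % 360 - 180  # signed offset to the face
        err = abs(offset - desired_key_offset_deg)
        if err < best_err:
            best, best_err = facing, err
    return best

# Sun at azimuth 200 degrees; place it 45 degrees off the face normal.
print(best_facing([(200, 8.0), (20, 1.5)], desired_key_offset_deg=45))  # 155
```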
Citations: 10
Plane, Ray, and Point: Enabling Precise Spatial Manipulations with Shape Constraints
Devamardeep Hayatpur, Seongkook Heo, Haijun Xia, W. Stuerzlinger, Daniel J. Wigdor
We present Plane, Ray, and Point, a set of interaction techniques that utilizes shape constraints to enable quick and precise object alignment and manipulation in virtual reality. Users create the three types of shape constraints, Plane, Ray, and Point, by using symbolic gestures. The shape constraints are used like scaffolding and limit and guide the movement of virtual objects that collide or intersect with them. The same set of gestures can be performed with the other hand, which allows users to further control the degrees of freedom for precise and constrained manipulation. The combination of shape constraints and bimanual gestures yields a rich set of interaction techniques to support object transformation. An exploratory study conducted with 3D design experts and novice users found the techniques to be useful in 3D scene design workflows and easy to learn and use.
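The underlying constraint idea can be shown with a few lines of vector math: snapping a manipulated point onto a plane or a ray. This is a generic sketch, not the paper's implementation; the helper names and example coordinates are assumptions.

```python
# Hedged sketch of plane and ray constraints as point projections.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def constrain_to_plane(p, plane_point, plane_normal_unit):
    """Snap point p onto the plane through plane_point with the given unit normal."""
    d = dot(sub(p, plane_point), plane_normal_unit)
    return sub(p, scale(plane_normal_unit, d))

def constrain_to_ray(p, origin, direction_unit):
    """Snap point p onto the ray from origin along a unit direction (t >= 0)."""
    t = max(0.0, dot(sub(p, origin), direction_unit))
    return add(origin, scale(direction_unit, t))

# A grabbed point hovering above the ground plane, then the same point snapped
# onto a ray pointing along +x.
print(constrain_to_plane((1.0, 0.7, 2.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # (1.0, 0.0, 2.0)
print(constrain_to_ray((1.0, 0.7, 2.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))    # (1.0, 0.0, 0.0)
```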
Citations: 24
DreamWalker: Substituting Real-World Walking Experiences with a Virtual Reality
Jackie Yang, Christian Holz, E. Ofek, Andrew D. Wilson
We explore a future in which people spend considerably more time in virtual reality, even during moments when they transition between locations in the real world. In this paper, we present DreamWalker, a VR system that enables such real-world walking while users explore and stay fully immersed inside large virtual environments in a headset. Provided with a real-world destination, DreamWalker finds a similar path in a pre-authored VR environment and guides the user while real-walking the virtual world. To keep the user from colliding with objects and people in the real world, DreamWalker's tracking system fuses GPS locations, inside-out tracking, and RGBD frames to 1) continuously and accurately position the user in the real world, 2) sense walkable paths and obstacles in real time, and 3) represent paths through a dynamically changing scene in VR to redirect the user towards the chosen destination. We demonstrate DreamWalker's versatility by enabling users to walk three paths across the large Microsoft campus while enjoying pre-authored VR worlds, supplemented with a variety of obstacle avoidance and redirection techniques. In our evaluation, 8 participants walked across campus along a 15-minute route, experiencing a lively virtual Manhattan that was full of animated cars, people, and other objects.
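As a loose illustration of fusing a smooth but drifting inside-out position with coarse GPS fixes, the sketch below uses a simple complementary filter. This is not DreamWalker's actual fusion pipeline; the 0.02 gain, function name, and coordinates are assumptions.

```python
# Hedged sketch: complementary-filter style fusion of local tracking and GPS.
def fuse_step(prev_estimate, local_delta_xy, gps_xy, gps_gain=0.02):
    """Advance the estimate by the inside-out tracking delta, then nudge it
    toward the latest GPS fix so accumulated drift bleeds off over time."""
    predicted = tuple(p + d for p, d in zip(prev_estimate, local_delta_xy))
    return tuple((1.0 - gps_gain) * p + gps_gain * g for p, g in zip(predicted, gps_xy))

# Standing still with 3 m of accumulated drift east of the GPS fix: each update
# moves the fused estimate a little closer to GPS without any sudden jump.
estimate, gps_fix = (103.0, 50.0), (100.0, 50.0)
for _ in range(5):
    estimate = fuse_step(estimate, (0.0, 0.0), gps_fix)
print(tuple(round(v, 2) for v in estimate))  # (102.71, 50.0)
```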
Citations: 49
Session details: Session 5A: Statistics and Interactive Machine Learning
Scott R. Klemmer
{"title":"Session details: Session 5A: Statistics and Interactive Machine Learning","authors":"Scott R. Klemmer","doi":"10.1145/3368377","DOIUrl":"https://doi.org/10.1145/3368377","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128424510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PrivateTalk: Activating Voice Input with Hand-On-Mouth Gesture Detected by Bluetooth Earphones
Yukang Yan, Chun Yu, Yingtian Shi, Minxing Xie
We introduce PrivateTalk, an on-body interaction technique that allows users to activate voice input by performing the Hand-On-Mouth gesture while speaking. The gesture is performed by partially covering the mouth with one hand from one side. PrivateTalk provides two benefits simultaneously. First, it enhances privacy by reducing the spread of the voice while also concealing the lip movements from the view of other people in the environment. Second, the simple gesture removes the need for speaking wake-up words and is more accessible than a physical or software button, especially when the device is not in the user's hands. To recognize the Hand-On-Mouth gesture, we propose a novel sensing technique that leverages the difference between the signals received by two Bluetooth earphones worn on the left and right ear. Our evaluation shows that the gesture can be accurately detected, and users consistently like PrivateTalk and consider it intuitive and effective.
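The sensing cue can be pictured with a hedged sketch: a hand covering one side of the mouth attenuates the voice reaching that ear's earphone microphone, so a sustained left/right level imbalance suggests the gesture. This is not the paper's recognizer; the 6 dB threshold and function names are assumptions for illustration.

```python
# Hedged sketch: flag the gesture from a left/right earphone level imbalance.
import math

def rms_db(samples):
    """Root-mean-square level of an audio frame, in dB relative to full scale 1.0."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-9))

def hand_on_mouth(left_frame, right_frame, imbalance_db=6.0):
    """True if one earphone receives the voice noticeably louder than the other."""
    return abs(rms_db(left_frame) - rms_db(right_frame)) >= imbalance_db

# Right channel attenuated to a quarter of the left channel's amplitude (~12 dB gap).
left = [0.4, -0.4] * 50
right = [0.1, -0.1] * 50
print(hand_on_mouth(left, right))  # True
```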
Citations: 16
Session details: Session 8A: Sensing
Gierad Laput
{"title":"Session details: Session 8A: Sensing","authors":"Gierad Laput","doi":"10.1145/3368383","DOIUrl":"https://doi.org/10.1145/3368383","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123381833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0