
Latest publications: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology

ActiTouch
Yang Zhang, W. Kienzle, Yanjun Ma, Shiu S. Ng, Hrvoje Benko, Chris Harrison
Contemporary AR/VR systems use in-air gestures or handheld controllers for interactivity. This overlooks the skin as a convenient surface for tactile, touch-driven interactions, which are generally more accurate and comfortable than free space interactions. In response, we developed ActiTouch, a new electrical method that enables precise on-skin touch segmentation by using the body as an RF waveguide. We combine this method with computer vision, enabling a system with both high tracking precision and robust touch detection. Our system requires no cumbersome instrumentation of the fingers or hands, requiring only a single wristband (e.g., smartwatch) and sensors integrated into an AR/VR headset. We quantify the accuracy of our approach through a user study and demonstrate how it can enable touchscreen-like interactions on the skin.
{"title":"ActiTouch","authors":"Yang Zhang, W. Kienzle, Yanjun Ma, Shiu S. Ng, Hrvoje Benko, Chris Harrison","doi":"10.1145/3332165.3347869","DOIUrl":"https://doi.org/10.1145/3332165.3347869","url":null,"abstract":"Contemporary AR/VR systems use in-air gestures or handheld controllers for interactivity. This overlooks the skin as a convenient surface for tactile, touch-driven interactions, which are generally more accurate and comfortable than free space interactions. In response, we developed ActiTouch, a new electrical method that enables precise on-skin touch segmentation by using the body as an RF waveguide. We combine this method with computer vision, enabling a system with both high tracking precision and robust touch detection. Our system requires no cumbersome instrumentation of the fingers or hands, requiring only a single wristband (e.g., smartwatch) and sensors integrated into an AR/VR headset. We quantify the accuracy of our approach through a user study and demonstrate how it can enable touchscreen-like interactions on the skin. Author","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122589746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
FaceWidgets: Exploring Tangible Interaction on Face with Head-Mounted Displays
Wen-Jie Tseng, Li-Yang Wang, Liwei Chan
We present FaceWidgets, a device integrated with the backside of a head-mounted display (HMD) that enables tangible interactions using physical controls. To allow for near range-to-eye interactions, our first study suggested displaying the virtual widgets at 20 cm from the eye positions, which is 9 cm from the HMD backside. We propose two novel interactions, widget canvas and palm-facing gesture, that can help users avoid double vision and allow them to access the interface as needed. Our second study showed that displaying a hand reference improved performance of face widgets interactions. We developed two applications of FaceWidgets, a fixed-layout 360 video player and a contextual input for smart home control. Finally, we compared four hand visualizations against the two applications in an exploratory study. Participants considered the transparent hand as the most suitable and responded positively to our system.
{"title":"FaceWidgets: Exploring Tangible Interaction on Face with Head-Mounted Displays","authors":"Wen-Jie Tseng, Li-Yang Wang, Liwei Chan","doi":"10.1145/3332165.3347946","DOIUrl":"https://doi.org/10.1145/3332165.3347946","url":null,"abstract":"We present FaceWidgets, a device integrated with the backside of a head-mounted display (HMD) that enables tangible interactions using physical controls. To allow for near range-to-eye interactions, our first study suggested displaying the virtual widgets at 20 cm from the eye positions, which is 9 cm from the HMD backside. We propose two novel interactions, widget canvas and palm-facing gesture, that can help users avoid double vision and allow them to access the interface as needed. Our second study showed that displaying a hand reference improved performance of face widgets interactions. We developed two applications of FaceWidgets, a fixed-layout 360 video player and a contextual input for smart home control. Finally, we compared four hand visualizations against the two applications in an exploratory study. Participants considered the transparent hand as the most suitable and responded positively to our system.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"17 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124768069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
GhostAR: A Time-space Editor for Embodied Authoring of Human-Robot Collaborative Task with Augmented Reality
Yuanzhi Cao, Tianyi Wang, Xun Qian, P. S. Rao, M. Wadhawan, Ke Huo, K. Ramani
We present GhostAR, a time-space editor for authoring and acting Human-Robot-Collaborative (HRC) tasks in-situ. Our system adopts an embodied authoring approach in Augmented Reality (AR), for spatially editing the actions and programming the robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes user's authoring as demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model which takes the real-time captured motion as inputs, maps it to the previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration. We emphasize an in-situ authoring and rapid iterations of joint plans without an offline training process. Further, we demonstrate and evaluate the effectiveness of our workflow through HRC use cases and a three-session user study.
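The dynamic time warping idea underlying the collaboration model can be illustrated with the textbook algorithm: warp a live motion trace onto a previously authored one by allowing elastic matches along the time axis. This is an illustrative sketch, not the authors' implementation, and it uses simple 1-D traces in place of real captured motion.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.

    cost[i][j] holds the DTW distance between the prefixes a[:i] and b[:j];
    each cell extends the cheapest of the three admissible warping steps.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local mismatch
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

For example, a trace played back at half speed (`[1, 1, 2, 2, 3, 3]`) still matches the authored `[1, 2, 3]` with zero cost, which is exactly the time-elasticity the system relies on to map live motion to authored actions.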
{"title":"GhostAR: A Time-space Editor for Embodied Authoring of Human-Robot Collaborative Task with Augmented Reality","authors":"Yuanzhi Cao, Tianyi Wang, Xun Qian, P. S. Rao, M. Wadhawan, Ke Huo, K. Ramani","doi":"10.1145/3332165.3347902","DOIUrl":"https://doi.org/10.1145/3332165.3347902","url":null,"abstract":"We present GhostAR, a time-space editor for authoring and acting Human-Robot-Collaborative (HRC) tasks in-situ. Our system adopts an embodied authoring approach in Augmented Reality (AR), for spatially editing the actions and programming the robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes user's authoring as demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model which takes the real-time captured motion as inputs, maps it to the previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration. We emphasize an in-situ authoring and rapid iterations of joint plans without an offline training process. Further, we demonstrate and evaluate the effectiveness of our workflow through HRC use cases and a three-session user study.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133934280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
Supporting Elder Connectedness through Cognitively Sustainable Design Interactions with the Memory Music Box
Rébecca Kleinberger, Alexandra Rieger, Janelle Sands, Janet M. Baker
Isolation is one of the largest contributors to a lack of wellbeing, increased anxiety and loneliness in older adults. In collaboration with elders in living facilities, we designed the Memory Music Box; a low-threshold platform to increase connectedness. The HCI community has contributed notable research in support of elders through monitoring, tracking and memory augmentation. Despite the Information and Communication Technologies field (ICT) advances in providing new opportunities for connection, challenges in accessibility increase the gap between elders and their loved ones. We approach this challenge by embedding a familiar form factor with innovative applications, performing design evaluations with our key target group to incorporate multi-iteration learnings. These findings culminate in a novel design that facilitates elders in crossing technology and communication barriers. Based on these findings, we discuss how future inclusive technologies for the older adults' can balance ease of use, subtlety and elements of Cognitively Sustainable Design.
{"title":"Supporting Elder Connectedness through Cognitively Sustainable Design Interactions with the Memory Music Box","authors":"Rébecca Kleinberger, Alexandra Rieger, Janelle Sands, Janet M. Baker","doi":"10.1145/3332165.3347877","DOIUrl":"https://doi.org/10.1145/3332165.3347877","url":null,"abstract":"Isolation is one of the largest contributors to a lack of wellbeing, increased anxiety and loneliness in older adults. In collaboration with elders in living facilities, we designed the Memory Music Box; a low-threshold platform to increase connectedness. The HCI community has contributed notable research in support of elders through monitoring, tracking and memory augmentation. Despite the Information and Communication Technologies field (ICT) advances in providing new opportunities for connection, challenges in accessibility increase the gap between elders and their loved ones. We approach this challenge by embedding a familiar form factor with innovative applications, performing design evaluations with our key target group to incorporate multi-iteration learnings. These findings culminate in a novel design that facilitates elders in crossing technology and communication barriers. 
Based on these findings, we discuss how future inclusive technologies for the older adults' can balance ease of use, subtlety and elements of Cognitively Sustainable Design.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134422240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Virtual Conferences
C. Lopes
For the past 40 years, research communities have embraced a culture that relies heavily on physical meetings of people from around the world: we present our most important work in conferences, we meet our peers in conferences, and we even make life-long friends in conferences. At the same time, a broad scientific consensus has emerged that warns that human emissions of greenhouse gases are warming the earth. For many of us, travel to conferences may be a substantial or even dominant part of our individual contribution to climate change. A single round-trip flight from Paris to New Orleans emits the equivalent of about 2.5 tons of carbon dioxide (CO2e) per passenger, which is a significant fraction of the total yearly emissions for an average resident of the US or Europe. Moreover, these emissions have no near-term technological fix, since jet fuel is difficult to replace with renewable energy sources. In this talk, I want to first raise awareness of the conundrum we are in by relying so heavily on air travel for our work. I will present some of the possible solutions that go from adopting small, incremental changes to radical ones. The talk focuses on one of the radical alternatives: virtual conferences. The technology for them is almost here and, for some time, I have been part of one community that organizes an annual conference in a virtual environment. Virtual conferences present many interesting challenges, some of them technological in nature, others that go beyond technology. Creating truly immersive conference experiences that make us feel "there" requires attention to personal and social experiences at physical conferences. Those experiences need to be recreated from the ground up in virtual spaces.
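The arithmetic behind the "significant fraction" claim is easy to check. The 2.5 t CO2e flight figure is from the abstract; the per-capita yearly averages below are assumptions for illustration (roughly 16 t CO2e for a US resident and 7 t for an EU resident), not numbers from the talk.

```python
# One Paris -> New Orleans round trip, per passenger (from the abstract).
FLIGHT_CO2E_TONS = 2.5

def flight_share(yearly_per_capita_tons):
    """Fraction of a resident's assumed yearly CO2e spent on one such trip."""
    return FLIGHT_CO2E_TONS / yearly_per_capita_tons

us_share = flight_share(16.0)  # assumed US average: about 16% of a year
eu_share = flight_share(7.0)   # assumed EU average: about a third of a year
```

Under these assumed averages, a single conference trip consumes on the order of one sixth to one third of a resident's annual footprint, which is what makes the figure "significant".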
{"title":"Virtual Conferences","authors":"C. Lopes","doi":"10.1145/3332165.3348236","DOIUrl":"https://doi.org/10.1145/3332165.3348236","url":null,"abstract":"For the past 40 years, research communities have embraced a culture that relies heavily on physical meetings of people from around the world: we present our most important work in conferences, we meet our peers in conferences, and we even make life-long friends in conferences. Also at the same time, a broad scientific consensus has emerged that warns that human emissions of greenhouse gases are warming the earth. For many of us, travel to conferences may be a substantial or even dominant part of our individual contribution to climate change. A single round-trip flight from Paris to New Orleans emits the equivalent of about 2.5 tons of carbon dioxide (CO2e) per passenger, which is a significant fraction of the total yearly emissions for an average resident of the US or Europe. Moreover, these emissions have no near-term technological fix, since jet fuel is difficult to replace with renewable energy sources. In this talk, I want to first raise awareness of the conundrum we are in by relying so heavily in air travel for our work. I will present some of the possible solutions that go from adopting small, incremental changes to radical ones. The talk focuses one of the radical alternatives: virtual conferences. The technology for them is almost here and, for some time, I have been part of one community that organizes an annual conference in a virtual environment. Virtual conferences present many interesting challenges, some of them technological in nature, others that go beyond technology. Creating truly immersive conference experiences that make us feel \"there\" requires attention to personal and social experiences at physical conferences. Those experiences need to be recreated from the ground up in virtual spaces. 
But in that process, they can also be rethought to become experiences not possible in real life.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134479156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
LeviProps
Rafael Morales, A. Marzo, Sriram Subramanian, Diego Martínez
LeviProps are tangible structures used to create interactive mid-air experiences. They are composed of an acoustically-transparent lightweight piece of fabric and attached beads that act as levitated anchors. This combination enables real-time 6 Degrees-of-Freedom control of levitated structures which are larger and more diverse than those possible with previous acoustic manipulation techniques. LeviProps can be used as free-form interactive elements and as projection surfaces. We developed an authoring tool to support the creation of LeviProps. Our tool considers the outline of the prop and the user constraints to compute the optimum locations for the anchors (i.e. maximizing trapping forces), increasing prop stability and maximum size. The tool produces a final LeviProp design which can be fabricated following a simple procedure. This paper explains and evaluates our approach and showcases example applications, such as interactive storytelling, games and mid-air displays.
{"title":"LeviProps","authors":"Rafael Morales, A. Marzo, Sriram Subramanian, Diego Martínez","doi":"10.1145/3332165.3347882","DOIUrl":"https://doi.org/10.1145/3332165.3347882","url":null,"abstract":"LeviProps are tangible structures used to create interactive mid-air experiences. They are composed of an acoustically- transparent lightweight piece of fabric and attached beads that act as levitated anchors. This combination enables real- time 6 Degrees-of-Freedom control of levitated structures which are larger and more diverse than those possible with previous acoustic manipulation techniques. LeviProps can be used as free-form interactive elements and as projection surfaces. We developed an authoring tool to support the creation of LeviProps. Our tool considers the outline of the prop and the user constraints to compute the optimum locations for the anchors (i.e. maximizing trapping forces), increasing prop stability and maximum size. The tool produces a final LeviProp design which can be fabricated following a simple procedure. This paper explains and evaluates our approach and showcases example applications, such as interactive storytelling, games and mid-air displays.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127841591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Context-Aware Online Adaptation of Mixed Reality Interfaces
David Lindlbauer, A. Feit, Otmar Hilliges
We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization which can be solved efficiently in real-time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary task interactions by 36%.
{"title":"Context-Aware Online Adaptation of Mixed Reality Interfaces","authors":"David Lindlbauer, A. Feit, Otmar Hilliges","doi":"10.1145/3332165.3347945","DOIUrl":"https://doi.org/10.1145/3332165.3347945","url":null,"abstract":"We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization which can be solved efficiently in real-time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. 
Finally, we show in a dual-task evaluation that our approach decreased secondary tasks interactions by 36%.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124554949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 105
Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight
Sebastian Marwecki, Andrew D. Wilson, E. Ofek, Mar González-Franco, Christian Holz
Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and find that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeably by using gaze in combination with common masking techniques.
{"title":"Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight","authors":"Sebastian Marwecki, Andrew D. Wilson, E. Ofek, Mar González-Franco, Christian Holz","doi":"10.1145/3332165.3347919","DOIUrl":"https://doi.org/10.1145/3332165.3347919","url":null,"abstract":"Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. 
We evaluated Mise-Unseen and our applications in a user study with 15 participants and find that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeably by using gaze in combination with common masking techniques.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123313809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
Turn-by-Wire: Computationally Mediated Physical Fabrication
Rundong Tian, V. Saran, Mareike Kritzler, F. Michahelles, E. Paulos
Advances in digital fabrication have simultaneously created new capabilities while reinforcing outdated workflows that constrain how, and by whom, these fabrication tools are used. In this paper, we investigate how a new class of hybrid-controlled machines can collaborate with novice and expert users alike to yield a more lucid making experience. We demonstrate these ideas through our system, Turn-by-Wire. By combining the capabilities of a traditional lathe with haptic input controllers that modulate both position and force, we detail a series of novel interaction metaphors that invite a more fluid making process spanning digital, model-centric, computer control, and embodied, adaptive, human control. We evaluate our system through a user study and discuss how these concepts generalize to other fabrication tools.
{"title":"Turn-by-Wire: Computationally Mediated Physical Fabrication","authors":"Rundong Tian, V. Saran, Mareike Kritzler, F. Michahelles, E. Paulos","doi":"10.1145/3332165.3347918","DOIUrl":"https://doi.org/10.1145/3332165.3347918","url":null,"abstract":"Advances in digital fabrication have simultaneously created new capabilities while reinforcing outdated workflows that constrain how, and by whom, these fabrication tools are used. In this paper, we investigate how a new class of hybrid-controlled machines can collaborate with novice and expert users alike to yield a more lucid making experience. We demonstrate these ideas through our system, Turn-by-Wire. By combining the capabilities of a traditional lathe with haptic input controllers that modulate both position and force, we detail a series of novel interaction metaphors that invite a more fluid making process spanning digital, model-centric, computer control, and embodied, adaptive, human control. We evaluate our system through a user study and discuss how these concepts generalize to other fabrication tools.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124226133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
X-Droid: A Quick and Easy Android Prototyping Framework with a Single-App Illusion
Donghwi Kim, Sooyoung Park, Jihoon Ko, Steven Y. Ko, Sung-ju Lee
We present X-Droid, a framework that provides Android app developers an ability to quickly and easily produce functional prototypes. Our work is motivated by the need for such ability and the lack of tools that provide it. Developers want to produce a functional prototype rapidly to test out potential features in real-life situations. However, current prototyping tools for mobile apps are limited to creating non-functional UI mockups that do not demonstrate actual features. With X-Droid, developers can create a new app that imports various kinds of functionality provided by other existing Android apps. In doing so, developers do not need to understand how other Android apps are implemented or need access to their source code. X-Droid provides a developer tool that enables developers to use the UIs of other Android apps and import desired functions into their prototypes. X-Droid also provides a run-time system that executes other apps' functionality in the background on off-the-shelf Android devices for seamless integration. Our evaluation shows that with the help of X-Droid, a developer imported a function from an existing Android app into a new prototype with only 51 lines of Java code, while the function itself requires 10,334 lines of Java code to implement (i.e., 200× improvement).
{"title":"X-Droid: A Quick and Easy Android Prototyping Framework with a Single-App Illusion","authors":"Donghwi Kim, Sooyoung Park, Jihoon Ko, Steven Y. Ko, Sung-ju Lee","doi":"10.1145/3332165.3347890","DOIUrl":"https://doi.org/10.1145/3332165.3347890","url":null,"abstract":"We present X-Droid, a framework that provides Android app developers an ability to quickly and easily produce functional prototypes. Our work is motivated by the need for such ability and the lack of tools that provide it. Developers want to produce a functional prototype rapidly to test out potential features in real-life situations. However, current prototyping tools for mobile apps are limited to creating non-functional UI mockups that do not demonstrate actual features. With X-Droid, developers can create a new app that imports various kinds of functionality provided by other existing Android apps. In doing so, developers do not need to understand how other Android apps are implemented or need access to their source code. X-Droid provides a developer tool that enables developers to use the UIs of other Android apps and import desired functions into their prototypes. X-Droid also provides a run-time system that executes other apps' functionality in the background on off-the-shelf Android devices for seamless integration. Our evaluation shows that with the help of X-Droid, a developer imported a function from an existing Android app into a new prototype with only 51 lines of Java code, while the function itself requires 10,334 lines of Java code to implement (i.e., 200× improvement).","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130953536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9