
Latest publications from the Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology

Bodystorming Human-Robot Interactions
David J. Porfirio, Evan Fisher, Allison Sauppé, Aws Albarghouthi, Bilge Mutlu
Designing and implementing human-robot interactions requires numerous skills, from having a rich understanding of social interactions and the capacity to articulate their subtle requirements, to the ability to then program a social robot with the many facets of such a complex interaction. Although designers are best suited to develop and implement these interactions due to their inherent understanding of the context and its requirements, these skills are a barrier to enabling designers to rapidly explore and prototype ideas: it is impractical for designers to also be experts on social interaction behaviors, and the technical challenges associated with programming a social robot are prohibitive. In this work, we introduce Synthé, which allows designers to act out, or bodystorm, multiple demonstrations of an interaction. These demonstrations are automatically captured and translated into prototypes for the design team using program synthesis. We evaluate Synthé in multiple design sessions involving pairs of designers bodystorming interactions and observing the resulting models on a robot. We build on the findings from these sessions to improve the capabilities of Synthé and demonstrate the use of these capabilities in a second design session.
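The abstract does not spell out how the captured demonstrations become prototypes. As a rough, hedged sketch of the general idea of synthesizing an interaction model from demonstration traces, and not Synthé's actual algorithm, the Python below merges each acted-out demonstration into a small state machine; the event and action names and the merge strategy are assumptions.

```python
# Illustrative sketch only (not Synthé): merging bodystormed demonstration
# traces into a simple finite-state interaction model.
from collections import defaultdict

def synthesize_model(demonstrations):
    """Each demonstration is a list of (human_event, robot_action) pairs.
    Returns state -> {human_event: (robot_action, next_state)}, built so
    that the machine reproduces every demonstration."""
    transitions = defaultdict(dict)
    for demo in demonstrations:
        state = "start"
        for human_event, robot_action in demo:
            if human_event not in transitions[state]:
                # First time this event is seen in this state: add a branch.
                # (Conflicting actions across demos would need resolution
                # in a real synthesis tool.)
                transitions[state][human_event] = (robot_action, state + ">" + human_event)
            state = transitions[state][human_event][1]
    return dict(transitions)

demos = [
    [("greet", "wave"), ("ask_for_help", "offer_menu")],
    [("greet", "wave"), ("say_goodbye", "nod")],
]
for state, edges in synthesize_model(demos).items():
    for event, (action, nxt) in edges.items():
        print(f"{state} --{event} / {action}--> {nxt}")
```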
Citations: 27
The Memory Palace: Exploring Visual-Spatial Paths for Strong, Memorable, Infrequent Authentication
Sauvik Das, David Lu, Taehoon Lee, Joanne Lo, Jason I. Hong
Many accounts and devices require only infrequent authentication by an individual, and thus authentication secrets should be both secure and memorable without much reinforcement. Inspired by people's strong visual-spatial memory, we introduce a novel system to help address this problem: the Memory Palace. The Memory Palace encodes authentication secrets as paths through a 3D virtual labyrinth navigated in the first-person perspective. We ran two experiments to iteratively design and evaluate the Memory Palace. In the first, we found that visual-spatial secrets are most memorable if navigated in a 3D first-person perspective. In the second, we comparatively evaluated the Memory Palace against Android's 9-dot pattern lock along three dimensions: memorability after one week, resilience to shoulder surfing, and speed. We found that relative to 9-dot, complexity-controlled secrets in the Memory Palace were significantly more memorable after one week, were much harder to break through shoulder surfing, and were not significantly slower to enter.
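As an illustration only, and not the paper's implementation, the sketch below treats a navigated labyrinth path as a secret: the path is encoded as a move sequence and checked against a salted hash, so the raw path never needs to be stored. The move alphabet and the hashing choices are assumptions.

```python
# Illustrative only: a path through a 3D labyrinth as an authentication secret.
import hashlib, hmac, os

MOVES = {"N", "S", "E", "W", "UP", "DOWN"}     # one symbol per labyrinth step

def enroll(path, salt=None):
    """Store only a salted hash of the navigated path, like a password."""
    assert all(step in MOVES for step in path)
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", "/".join(path).encode(), salt, 100_000)
    return salt, digest

def verify(path, salt, digest):
    """Re-derive the hash from the path the user navigates at login time."""
    candidate = hashlib.pbkdf2_hmac("sha256", "/".join(path).encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll(["N", "E", "UP", "N", "W"])
print(verify(["N", "E", "UP", "N", "W"], salt, digest))   # True
print(verify(["N", "E", "UP", "N", "S"], salt, digest))   # False
```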
Citations: 7
Unakite: Scaffolding Developers' Decision-Making Using the Web
Michael Xieyang Liu, Jane Hsieh, Nathan Hahn, Angelina Zhou, Emily Deng, Shaun Burley, C. Taylor, A. Kittur, B. Myers
Developers spend a significant portion of their time searching for solutions and methods online. While numerous tools have been developed to support this exploratory process, in many cases the answers to developers' questions involve trade-offs among multiple valid options and not just a single solution. Through interviews, we discovered that developers express a desire for help with decision-making and understanding trade-offs. Through an analysis of Stack Overflow posts, we observed that many answers describe such trade-offs. These findings suggest that tools designed to help a developer capture information and make decisions about trade-offs can provide crucial benefits for both the developers and others who want to understand their design rationale. In this work, we probe this hypothesis with a prototype system named Unakite that collects, organizes, and keeps track of information about trade-offs and builds a comparison table, which can be saved as a design rationale for later use. Our evaluation results show that Unakite reduces the cost of capturing tradeoff-related information by 45%, and that the resulting comparison table speeds up a subsequent developer's ability to understand the trade-offs by about a factor of three.
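The sketch below is not Unakite's data model; it only illustrates the kind of options-by-criteria comparison table, with evidence snippets attached to each cell, that the abstract describes. All field and method names are assumptions.

```python
# Illustrative sketch of a trade-off comparison table (not Unakite's code).
from dataclasses import dataclass, field

@dataclass
class Cell:
    rating: str = ""                               # e.g. "+", "-", "?"
    evidence: list = field(default_factory=list)   # snippets/URLs copied from the web

class ComparisonTable:
    def __init__(self, options, criteria):
        self.options = options        # e.g. candidate libraries
        self.criteria = criteria      # e.g. "ease of use", "dependencies"
        self.cells = {}               # (option, criterion) -> Cell

    def add_evidence(self, option, criterion, rating, snippet):
        cell = self.cells.setdefault((option, criterion), Cell())
        cell.rating = rating
        cell.evidence.append(snippet)

    def render(self):
        width = max(len(c) for c in self.criteria) + 2
        lines = ["option".ljust(12) + "".join(c.ljust(width) for c in self.criteria)]
        for o in self.options:
            row = [self.cells.get((o, c), Cell()).rating for c in self.criteria]
            lines.append(o.ljust(12) + "".join(r.ljust(width) for r in row))
        return "\n".join(lines)

table = ComparisonTable(["requests", "urllib3"], ["ease of use", "dependencies"])
table.add_evidence("requests", "ease of use", "+", "SO answer: simple high-level API")
table.add_evidence("urllib3", "dependencies", "+", "docs: no required runtime dependencies")
print(table.render())
```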
Citations: 36
Type, Then Correct: Intelligent Text Correction Techniques for Mobile Text Entry Using Neural Networks
M. Zhang, He Wen, J. Wobbrock
Current text correction processes on mobile touch devices are laborious: users either extensively use backspace, or navigate the cursor to the error position, make a correction, and navigate back, usually by employing multiple taps or drags over small targets. In this paper, we present three novel text correction techniques to improve the correction process: Drag-n-Drop, Drag-n-Throw, and Magic Key. All of the techniques skip error-deletion and cursor-positioning procedures, and instead allow the user to type the correction first, and then apply that correction to a previously committed error. Specifically, Drag-n-Drop allows a user to drag a correction and drop it on the error position. Drag-n-Throw lets a user drag a correction from the keyboard suggestion list and "throw" it to the approximate area of the error text, with a neural network determining the most likely error in that area. Magic Key allows a user to type a correction and tap a designated key to highlight possible error candidates, which are also determined by a neural network. The user can navigate among these candidates by directionally dragging from atop the key, and can apply the correction by simply tapping the key. We evaluated these techniques in both text correction and text composition tasks. Our results show that correction with the new techniques was faster than de facto cursor and backspace-based correction. Our techniques apply to any touch-based text entry method.
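As a hedged illustration of the Drag-n-Throw idea, the sketch below applies a typed correction to the most error-like token near the position it was thrown to; the similarity-based scoring function is only a stand-in for the paper's neural network.

```python
# Illustrative only: applying a typed correction near the "throw" position.
import difflib

def error_likelihood(token, correction):
    """Stand-in for the paper's neural network: here, just how similar the
    token is to the typed correction (a likely typo scores high)."""
    return difflib.SequenceMatcher(None, token, correction).ratio()

def apply_throw(tokens, correction, throw_index, window=3):
    """Replace the most error-like token within `window` words of the
    position the correction was thrown to."""
    lo = max(0, throw_index - window)
    hi = min(len(tokens), throw_index + window + 1)
    best = max(range(lo, hi), key=lambda i: error_likelihood(tokens[i], correction))
    fixed = list(tokens)
    fixed[best] = correction
    return fixed

tokens = "the quick briwn fox jumps".split()
print(" ".join(apply_throw(tokens, "brown", throw_index=3)))
# -> the quick brown fox jumps
```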
Citations: 25
Session details: Session 6B: Haptics and Illusions
Fraser Anderson
Citations: 0
Aero-plane: A Handheld Force-Feedback Device that Renders Weight Motion Illusion on a Virtual 2D Plane
Seungwoo Je, Myung Jin Kim, Woojin Lee, Byungjoo Lee, Xing-Dong Yang, Pedro Lopes, Andrea Bianchi
Force feedback is said to be the next frontier in virtual reality (VR). Recently, with consumers pushing forward with untethered VR, researchers turned away from solutions based on bulky hardware (e.g., exoskeletons and robotic arms) and started exploring smaller portable or wearable devices. However, when it comes to rendering inertial forces, such as when moving a heavy object around or when interacting with objects with unique mass properties, current ungrounded force feedback devices are unable to provide quick weight shifting sensations that can realistically simulate weight changes over 2D surfaces. In this paper we introduce Aero-plane, a force-feedback handheld controller based on two miniature jet propellers that can render shifting weights of up to 14 N within 0.3 seconds. Through two user studies we: (1) characterize the users' ability to perceive and correctly recognize different motion paths on a virtual plane while using our device; and, (2) tested the level of realism and immersion of the controller when used in two VR applications (a rolling ball on a plane, and using kitchen tools of different shapes and sizes). Lastly, we present a set of applications that further explore different usage cases and alternative form-factors for our device.
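The abstract does not give the control law, so the sketch below only illustrates the kind of force and torque balance a two-thruster handheld would need to place a rendered weight at a chosen point along the axis between its propellers. The thruster spacing is an assumption; the 14 N cap is taken from the abstract.

```python
# Hedged sketch (not the authors' controller): render weight W at position p
# along the axis between two thrusters at -d and +d from the grip.
MAX_TOTAL_FORCE_N = 14.0   # maximum shifting weight reported in the abstract

def thrusts_for_weight(weight_n, p, d=0.15):
    """Return (f1, f2) so that f1 + f2 = weight and the resultant force
    acts at position p (metres), with -d <= p <= d. Derived from
    f1 + f2 = W and f1*(-d) + f2*(+d) = W*p."""
    if not -d <= p <= d:
        raise ValueError("target position must lie between the thrusters")
    weight_n = min(weight_n, MAX_TOTAL_FORCE_N)
    f2 = weight_n * (p + d) / (2 * d)   # thruster at +d
    f1 = weight_n - f2                  # thruster at -d
    return f1, f2

print(thrusts_for_weight(10.0, 0.0))    # (5.0, 5.0): weight centred
print(thrusts_for_weight(10.0, 0.15))   # (0.0, 10.0): weight fully at one end
```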
Citations: 57
Session details: Session 2A: Augmented and Mixed Reality
R. Xiao
Citations: 0
Sozu
Yang Zhang, Yasha Iravantchi, Haojian Jin, Swarun Kumar, Chris Harrison
Robust, wide-area sensing of human environments has been a long-standing research goal. We present Sozu, a new low-cost sensing system that can detect a wide range of events wirelessly, through walls and without line of sight, at whole-building scale. To achieve this in a battery-free manner, Sozu tags convert energy from activities that they sense into RF broadcasts, acting like miniature self-powered radio stations. We describe the results from a series of iterative studies, culminating in a deployment study with 30 instrumented objects. Results show that Sozu is very accurate, with true positive event detection exceeding 99%, with almost no false positives. Beyond event detection, we show that Sozu can be extended to detect richer signals, such as the state, intensity, count, and rate of events.
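As a receiver-side illustration only, not the Sozu implementation, the sketch below estimates per-tag event state, count, and rate from the timestamps of received broadcasts; the log format and window length are assumptions.

```python
# Illustrative only: each tag activation yields one RF broadcast with the
# tag's ID, so richer signals can be estimated from broadcast timestamps.
from collections import defaultdict

def summarize(broadcasts, window_s=10.0, now=None):
    """broadcasts: list of (timestamp_s, tag_id). Returns per-tag count,
    rate, and active state over the most recent `window_s` seconds."""
    now = now if now is not None else max(t for t, _ in broadcasts)
    recent = defaultdict(list)
    for t, tag in broadcasts:
        if now - t <= window_s:
            recent[tag].append(t)
    return {tag: {"count": len(ts), "rate_hz": len(ts) / window_s, "active": True}
            for tag, ts in recent.items()}

log = [(0.5, "faucet"), (1.1, "faucet"), (1.8, "faucet"), (9.0, "door")]
print(summarize(log, window_s=10.0, now=10.0))
```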
Citations: 5
Multi-Touch Kit: A Do-It-Yourself Technique for Capacitive Multi-Touch Sensing Using a Commodity Microcontroller
Narjes Pourjafarian, A. Withana, J. Paradiso, Jürgen Steimle
Mutual capacitance-based multi-touch sensing is now a ubiquitous and high-fidelity input technology. However, due to the complexity of electrical and signal processing requirements, it remains very challenging to create interface prototypes with custom-designed multi-touch input surfaces. In this paper, we introduce Multi-Touch Kit, a technique enabling electronics novices to rapidly prototype customized capacitive multi-touch sensors. In contrast to existing techniques, it works with a commodity microcontroller and open-source software and does not require any specialized hardware. Evaluation results show that our approach enables multi-touch sensors with a high spatial and temporal resolution and can accurately detect multiple simultaneous touches. A set of application examples demonstrates the versatile uses of our approach for sensors of different scales, curvature, and materials.
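The toolkit's firmware and processing pipeline are not reproduced here; the sketch below only illustrates the basic mutual-capacitance step of comparing one frame of electrode-crossing readings against an untouched baseline to locate touches. The matrix size, baseline values, and threshold are assumptions.

```python
# Host-side sketch only (not the Multi-Touch Kit code): find touches in one
# frame of mutual-capacitance readings by baseline subtraction.
def detect_touches(frame, baseline, threshold=30):
    """frame, baseline: 2-D lists indexed [tx][rx] of raw readings.
    A touch lowers the mutual capacitance at a crossing, so we look for
    readings that drop well below the untouched baseline."""
    touches = []
    for tx, (row, base_row) in enumerate(zip(frame, baseline)):
        for rx, (value, base) in enumerate(zip(row, base_row)):
            if base - value > threshold:
                touches.append((tx, rx, base - value))   # position + signal strength
    return touches

baseline = [[200] * 4 for _ in range(4)]       # 4x4 electrode matrix, untouched
frame = [row[:] for row in baseline]
frame[1][2] -= 60                              # simulated finger at crossing (1, 2)
print(detect_touches(frame, baseline))         # -> [(1, 2, 60)]
```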
Citations: 35
Resized Grasping in VR: Estimating Thresholds for Object Discrimination
Joanna Bergström, Aske Mottelson, Jarrod Knibbe
Previous work in VR has demonstrated how individual physical objects can represent multiple virtual objects in different locations by redirecting the user's hand. We show how individual objects can represent multiple virtual objects of different sizes by resizing the user's grasp. We redirect the positions of the user's fingers by visual translation gains, inducing an illusion that can make physical objects seem larger or smaller. We present a discrimination experiment to estimate the thresholds of resizing virtual objects from physical objects, without the user reliably noticing a difference. The results show that the size difference is easily detected when a physical object is used to represent an object less than 90% of its size. When physical objects represent larger virtual objects, however, then scaling is tightly coupled to the physical object's size: smaller physical objects allow more virtual resizing (up to a 50% larger virtual size). Resized Grasping considerably broadens the scope of using illusions to provide rich haptic experiences in virtual reality.
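As a minimal sketch of the core idea, not the authors' implementation, the code below applies a visual translation gain to fingertip positions about the grasp centre, so that one physical object can stand in for larger or smaller virtual ones.

```python
# Hedged sketch: virtual fingertips are offset from the grasp centre by
# `gain` times their physical offset; gain > 1 makes the physical object
# stand in for a larger virtual one. Positions are 3-D tuples in metres.
def resize_grasp(finger_positions, grasp_centre, gain):
    """Return virtual fingertip positions scaled about the grasp centre."""
    virtual = []
    for p in finger_positions:
        virtual.append(tuple(c + gain * (pc - c) for pc, c in zip(p, grasp_centre)))
    return virtual

physical_fingers = [(0.02, 0.0, 0.0), (-0.02, 0.0, 0.0)]   # thumb and index
centre = (0.0, 0.0, 0.0)
print(resize_grasp(physical_fingers, centre, gain=1.5))
# Fingers appear 50% further apart, matching the abstract's report of up to
# 50% larger virtual sizes being usable with small physical objects.
```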
Citations: 42