Bodystorming Human-Robot Interactions
David J. Porfirio, Evan Fisher, Allison Sauppé, Aws Albarghouthi, Bilge Mutlu
DOI: 10.1145/3332165.3347957
Designing and implementing human-robot interactions requires numerous skills, from having a rich understanding of social interactions and the capacity to articulate their subtle requirements, to the ability to then program a social robot with the many facets of such a complex interaction. Although designers are best suited to develop and implement these interactions due to their inherent understanding of the context and its requirements, these skills are a barrier to enabling designers to rapidly explore and prototype ideas: it is impractical for designers to also be experts on social interaction behaviors, and the technical challenges associated with programming a social robot are prohibitive. In this work, we introduce Synthé, which allows designers to act out, or bodystorm, multiple demonstrations of an interaction. These demonstrations are automatically captured and translated into prototypes for the design team using program synthesis. We evaluate Synthé in multiple design sessions involving pairs of designers bodystorming interactions and observing the resulting models on a robot. We build on the findings from these sessions to improve the capabilities of Synthé and demonstrate the use of these capabilities in a second design session.
The Memory Palace: Exploring Visual-Spatial Paths for Strong, Memorable, Infrequent Authentication
Sauvik Das, David Lu, Taehoon Lee, Joanne Lo, Jason I. Hong
DOI: 10.1145/3332165.3347917
Many accounts and devices require only infrequent authentication by an individual, and thus authentication secrets should be both secure and memorable without much reinforcement. Inspired by people's strong visual-spatial memory, we introduce a novel system to help address this problem: the Memory Palace. The Memory Palace encodes authentication secrets as paths through a 3D virtual labyrinth navigated in the first-person perspective. We ran two experiments to iteratively design and evaluate the Memory Palace. In the first, we found that visual-spatial secrets are most memorable if navigated in a 3D first-person perspective. In the second, we comparatively evaluated the Memory Palace against Android's 9-dot pattern lock along three dimensions: memorability after one week, resilience to shoulder surfing, and speed. We found that relative to 9-dot, complexity-controlled secrets in the Memory Palace were significantly more memorable after one week, were much harder to break through shoulder surfing, and were not significantly slower to enter.
Unakite: Scaffolding Developers' Decision-Making Using the Web
Michael Xieyang Liu, Jane Hsieh, Nathan Hahn, Angelina Zhou, Emily Deng, Shaun Burley, C. Taylor, A. Kittur, B. Myers
DOI: 10.1145/3332165.3347908
Developers spend a significant portion of their time searching for solutions and methods online. While numerous tools have been developed to support this exploratory process, in many cases the answers to developers' questions involve trade-offs among multiple valid options and not just a single solution. Through interviews, we discovered that developers express a desire for help with decision-making and understanding trade-offs. Through an analysis of Stack Overflow posts, we observed that many answers describe such trade-offs. These findings suggest that tools designed to help a developer capture information and make decisions about trade-offs can provide crucial benefits for both the developers and others who want to understand their design rationale. In this work, we probe this hypothesis with a prototype system named Unakite that collects, organizes, and keeps track of information about trade-offs and builds a comparison table, which can be saved as a design rationale for later use. Our evaluation results show that Unakite reduces the cost of capturing trade-off-related information by 45%, and that the resulting comparison table speeds up a subsequent developer's ability to understand the trade-offs by about a factor of three.
Type, Then Correct: Intelligent Text Correction Techniques for Mobile Text Entry Using Neural Networks
M. Zhang, He Wen, J. Wobbrock
DOI: 10.1145/3332165.3347924
Current text correction processes on mobile touch devices are laborious: users either extensively use backspace, or navigate the cursor to the error position, make a correction, and navigate back, usually by employing multiple taps or drags over small targets. In this paper, we present three novel text correction techniques to improve the correction process: Drag-n-Drop, Drag-n-Throw, and Magic Key. All of the techniques skip error-deletion and cursor-positioning procedures, and instead allow the user to type the correction first, and then apply that correction to a previously committed error. Specifically, Drag-n-Drop allows a user to drag a correction and drop it on the error position. Drag-n-Throw lets a user drag a correction from the keyboard suggestion list and "throw" it to the approximate area of the error text, with a neural network determining the most likely error in that area. Magic Key allows a user to type a correction and tap a designated key to highlight possible error candidates, which are also determined by a neural network. The user can navigate among these candidates by directionally dragging from atop the key, and can apply the correction by simply tapping the key. We evaluated these techniques in both text correction and text composition tasks. Our results show that correction with the new techniques was faster than the de facto cursor- and backspace-based correction. Our techniques apply to any touch-based text entry method.
{"title":"Session details: Session 6B: Haptics and Illusions","authors":"Fraser Anderson","doi":"10.1145/3368380","DOIUrl":"https://doi.org/10.1145/3368380","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121484274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aero-plane: A Handheld Force-Feedback Device that Renders Weight Motion Illusion on a Virtual 2D Plane
Seungwoo Je, Myung Jin Kim, Woojin Lee, Byungjoo Lee, Xing-Dong Yang, Pedro Lopes, Andrea Bianchi
DOI: 10.1145/3332165.3347926
Force feedback is said to be the next frontier in virtual reality (VR). Recently, with consumers pushing toward untethered VR, researchers have turned away from solutions based on bulky hardware (e.g., exoskeletons and robotic arms) and started exploring smaller portable or wearable devices. However, when it comes to rendering inertial forces, such as when moving a heavy object around or when interacting with objects with unique mass properties, current ungrounded force-feedback devices are unable to provide the quick weight-shifting sensations needed to realistically simulate weight changes over 2D surfaces. In this paper we introduce Aero-plane, a force-feedback handheld controller based on two miniature jet propellers that can render shifting weights of up to 14 N within 0.3 seconds. Through two user studies we (1) characterized the users' ability to perceive and correctly recognize different motion paths on a virtual plane while using our device, and (2) tested the level of realism and immersion of the controller when used in two VR applications (a rolling ball on a plane, and using kitchen tools of different shapes and sizes). Lastly, we present a set of applications that further explore different use cases and alternative form factors for our device.
{"title":"Session details: Session 2A: Augmented and Mixed Reality","authors":"R. Xiao","doi":"10.1145/3368371","DOIUrl":"https://doi.org/10.1145/3368371","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133166046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sozu
Yang Zhang, Yasha Iravantchi, Haojian Jin, Swarun Kumar, Chris Harrison
DOI: 10.1145/3332165.3347952
Robust, wide-area sensing of human environments has been a long-standing research goal. We present Sozu, a new low-cost sensing system that can detect a wide range of events wirelessly, through walls and without line of sight, at whole-building scale. To achieve this in a battery-free manner, Sozu tags convert energy from the activities that they sense into RF broadcasts, acting like miniature self-powered radio stations. We describe the results from a series of iterative studies, culminating in a deployment study with 30 instrumented objects. Results show that Sozu is very accurate, with true positive event detection exceeding 99% and almost no false positives. Beyond event detection, we show that Sozu can be extended to detect richer signals, such as the state, intensity, count, and rate of events.
Multi-Touch Kit: A Do-It-Yourself Technique for Capacitive Multi-Touch Sensing Using a Commodity Microcontroller
Narjes Pourjafarian, A. Withana, J. Paradiso, Jürgen Steimle
DOI: 10.1145/3332165.3347895
Mutual capacitance-based multi-touch sensing is now a ubiquitous and high-fidelity input technology. However, due to the complexity of electrical and signal processing requirements, it remains very challenging to create interface prototypes with custom-designed multi-touch input surfaces. In this paper, we introduce Multi-Touch Kit, a technique enabling electronics novices to rapidly prototype customized capacitive multi-touch sensors. In contrast to existing techniques, it works with a commodity microcontroller and open-source software and does not require any specialized hardware. Evaluation results show that our approach enables multi-touch sensors with high spatial and temporal resolution that accurately detect multiple simultaneous touches. A set of application examples demonstrates the versatile uses of our approach for sensors of different scales, curvatures, and materials.
Resized Grasping in VR: Estimating Thresholds for Object Discrimination
Joanna Bergström, Aske Mottelson, Jarrod Knibbe
DOI: 10.1145/3332165.3347939
Previous work in VR has demonstrated how individual physical objects can represent multiple virtual objects in different locations by redirecting the user's hand. We show how individual objects can represent multiple virtual objects of different sizes by resizing the user's grasp. We redirect the positions of the user's fingers by visual translation gains, inducing an illusion that can make physical objects seem larger or smaller. We present a discrimination experiment to estimate the thresholds for resizing virtual objects relative to physical objects without the user reliably noticing a difference. The results show that the size difference is easily detected when a physical object is used to represent a virtual object less than 90% of its size. When physical objects represent larger virtual objects, however, scaling is tightly coupled to the physical object's size: smaller physical objects allow more virtual resizing (up to a 50% larger virtual size). Resized Grasping considerably broadens the scope of using illusions to provide rich haptic experiences in virtual reality.