Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality
Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, J. Tompkin, J. Hughes, Jeff Huang
UIST 2019. DOI: 10.1145/3332165.3347904
Smartphone augmented reality (AR) lets users interact with physical and virtual spaces simultaneously. With 3D hand tracking, smartphones become an apparatus for grabbing and moving virtual objects directly. Based on design considerations for interaction, mobility, and object appearance and physics, we implemented a prototype for portable 3D hand tracking using a smartphone, a Leap Motion controller, and a computation unit. Following an experience prototyping procedure, 12 researchers used the prototype to explore usability issues and define the design space. We identified issues in perception (moving to the object, reaching for the object), manipulation (successfully grabbing and orienting the object), and behavioral understanding (knowing how to use the smartphone as a viewport). To overcome these issues, we designed object-based feedback and accommodation mechanisms and studied their perceptual and behavioral effects via two tasks: picking up distant objects, and assembling a virtual house from blocks. Our mechanisms enabled significantly faster and more successful interaction than the initial prototype when picking up and manipulating stationary and moving objects, with lower cognitive load and greater user preference. The resulting system, Portal-ble, improves user intuition and aids free-hand interaction in mobile situations.
CircuitStyle
J. Davis, Jun Gong, Yunxin Sun, Parmit K. Chilana, Xing-Dong Yang
UIST 2019. DOI: 10.1145/3332165.3347920
Instructors of hardware computing face many challenges, including maintaining awareness of student progress, allocating their time adequately between lecturing and helping individual students, and keeping students engaged even while they debug problems. Based on formative interviews with 5 electronics instructors, we found that many circuit style behaviors could help novice users prevent or efficiently debug common problems. Drawing inspiration from the software engineering practice of coding style, these circuit style behaviors consist of best practices and guidelines for implementing circuit prototypes that do not interfere with the functionality of the circuit but help a circuit be more readable, less error-prone, and easier to debug. To examine whether these circuit style behaviors could be peripherally enforced, aid an in-person instructor's ability to facilitate a workshop, and not monopolize the instructor's attention, we developed CircuitStyle, a teaching aid for in-person hardware computing workshops. To evaluate the effectiveness of our tool, we deployed the system in an in-person maker-space workshop. The instructor appreciated CircuitStyle's ability to provide a broad understanding of the workshop's progress and the potential for the system to help instructors of various backgrounds better engage with and understand the needs of their classroom.
Gaze-Assisted Typing for Smart Glasses
Sunggeun Ahn, Geehyuk Lee
UIST 2019. DOI: 10.1145/3332165.3347883
Text entry is expected to be a common task for smart glasses users; it is generally performed using a touchpad on the temple or, in a promising alternative, using eye tracking. However, each approach has its own limitations. For more efficient text entry, we present the concept of gaze-assisted typing (GAT), which uses both a touchpad and eye tracking. We initially examined GAT with a minimal eye input load and demonstrated that GAT was 51% faster than a two-step touch input typing method (M-SwipeBoard: 5.85 words per minute (wpm); GAT: 8.87 wpm). We also compared GAT methods with varying numbers of touch gestures. The results showed that a GAT variant requiring five different touch gestures was the most preferred, although all GAT techniques were equally efficient. Finally, we compared GAT with touch-only typing (SwipeZone) and eye-only typing (adjustable dwell) using an eye-trackable head-worn display. The results demonstrate that the most preferred technique, GAT, was 25.4% faster than eye-only typing and 29.4% faster than touch-only typing (GAT: 11.04 wpm, eye-only typing: 8.81 wpm, touch-only typing: 8.53 wpm).
{"title":"Session details: Session 8B: Touch Input","authors":"Mayank Goel","doi":"10.1145/3368384","DOIUrl":"https://doi.org/10.1145/3368384","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117301981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Plane, Ray, and Point: Enabling Precise Spatial Manipulations with Shape Constraints
Devamardeep Hayatpur, Seongkook Heo, Haijun Xia, W. Stuerzlinger, Daniel J. Wigdor
UIST 2019. DOI: 10.1145/3332165.3347916
We present Plane, Ray, and Point, a set of interaction techniques that utilize shape constraints to enable quick and precise object alignment and manipulation in virtual reality. Users create the three types of shape constraints, Plane, Ray, and Point, using symbolic gestures. The shape constraints act like scaffolding: they limit and guide the movement of virtual objects that collide or intersect with them. The same set of gestures can be performed with the other hand, which allows users to further control the degrees of freedom for precise and constrained manipulation. The combination of shape constraints and bimanual gestures yields a rich set of interaction techniques to support object transformation. An exploratory study conducted with 3D design experts and novice users found the techniques to be useful in 3D scene design workflows and easy to learn and use.
Optimizing Portrait Lighting at Capture-Time Using a 360 Camera as a Light Probe
Jane L. E, Ohad Fried, Maneesh Agrawala
UIST 2019. DOI: 10.1145/3332165.3347893
We present a capture-time tool designed to help casual photographers orient their subject to achieve a user-specified target facial appearance. The inputs to our tool are an HDR environment map of the scene captured using a 360 camera, and a target facial appearance, selected from a gallery of common studio lighting styles. Our tool computes the optimal orientation for the subject to achieve the target lighting using a computationally efficient precomputed radiance transfer-based approach. It then tells the photographer how far to rotate about the subject. Optionally, our tool can suggest how to orient a secondary external light source (e.g., a phone screen) about the subject's face to further improve the match to the target lighting. We demonstrate the effectiveness of our approach in a variety of indoor and outdoor scenes using many different subjects to achieve a variety of looks. A user evaluation suggests that our tool reduces the mental effort required by photographers to produce well-lit portraits.
LightAnchors
Karan Ahuja, Sujeath Pareddy, R. Xiao, Mayank Goel, Chris Harrison
UIST 2019. DOI: 10.1145/3332165.3347884
Augmented reality requires precise and instant overlay of digital information onto everyday objects. We present our work on LightAnchors, a new method for displaying spatially-anchored data. We take advantage of pervasive point lights, such as LEDs and light bulbs, for both in-view anchoring and data transmission. These lights are blinked at high speed to encode data. We built a proof-of-concept application that runs on iOS without any hardware or software modifications. We also ran a study to characterize the performance of LightAnchors and built eleven example demos to highlight the potential of our approach.
Self-healing UI: Mechanically and Electrically Self-healing Materials for Sensing and Actuation Interfaces
Koya Narumi, Fang Qin, Siyuan Liu, Huai-Yu Cheng, Jianzhe Gu, Y. Kawahara, Mohammad Islam, Lining Yao
UIST 2019. DOI: 10.1145/3332165.3347901
Living things in nature have long used the ability to "heal" wounds on their soft bodies to survive in the outer environment. To bring this self-healing property to everyday interfaces, we propose Self-healing UI, a soft-bodied interface that can intrinsically self-heal damage without external stimuli or glue. The key material for achieving Self-healing UI is MWCNTs-PBS, a composite of a self-healing polymer, polyborosiloxane (PBS), and a filler material, multi-walled carbon nanotubes (MWCNTs), which retains mechanical and electrical self-healability. We developed a hybrid model that combines PBS, MWCNTs-PBS, and other common soft materials, including fabric and silicone, to build interface devices with self-healing, sensing, and actuation capabilities. These devices were implemented by layer-by-layer stacking fabrication without glue or any post-processing, leveraging the materials' inherent self-healing property between layers. We then demonstrated sensing primitives and interactive applications that extend the design space of shape-changing interfaces with the ability to transform, conform, reconfigure, heal, and fuse, which we believe can enrich the toolbox of human-computer interaction (HCI).
MeCap: Whole-Body Digitization for Low-Cost VR/AR Headsets
Karan Ahuja, Chris Harrison, Mayank Goel, R. Xiao
UIST 2019. DOI: 10.1145/3332165.3347889
Low-cost, smartphone-powered VR/AR headsets are becoming more popular. These basic devices, little more than plastic or cardboard shells, lack advanced features, such as controllers for the hands, limiting their interactive capability. Moreover, even high-end consumer headsets lack the ability to track the body and face. For this reason, interactive experiences like social VR are underdeveloped. We introduce MeCap, which enables commodity VR headsets to be augmented with powerful motion capture ("MoCap") and user-sensing capabilities at very low cost (under $5). Using only a pair of hemispherical mirrors and the existing rear-facing camera of a smartphone, MeCap provides real-time estimates of a wearer's 3D body pose, hand pose, facial expression, physical appearance and surrounding environment, capabilities which are either absent in contemporary VR/AR systems or which require specialized hardware and controllers. We evaluate the accuracy of each of our tracking features, the results of which show imminent feasibility.
Third-Person Piloting: Increasing Situational Awareness using a Spatially Coupled Second Drone
Ryotaro Temma, Kazuki Takashima, Kazuyuki Fujita, Koh Sueda, Y. Kitamura
UIST 2019. DOI: 10.1145/3332165.3347953
We propose Third-Person Piloting, a novel drone manipulation interface that increases situational awareness using an interactive third-person perspective from a second, spatially coupled drone. The pilot uses a controller with a manipulatable miniature drone. Our algorithm tracks the relationship between the pilot's eye position and the miniature drone and ensures that the same spatial relationship is maintained between the two real drones in the sky. This allows the pilot to obtain various third-person perspectives by changing the orientation of the miniature drone while maintaining standard primary drone control using the conventional controller. We design and implement a working prototype with programmable drones and propose several representative operation scenarios. We gather user feedback from novices, advanced beginners, and experts to obtain initial insights into our interface design. Our results suggest that the interactive third-person perspective provided by the second drone offers sufficient potential to increase situational awareness and support the pilot's primary drone operations.