
Latest Publications from the Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology

Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality
Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, J. Tompkin, J. Hughes, Jeff Huang
Smartphone augmented reality (AR) lets users interact with physical and virtual spaces simultaneously. With 3D hand tracking, smartphones become apparatus to grab and move virtual objects directly. Based on design considerations for interaction, mobility, and object appearance and physics, we implemented a prototype for portable 3D hand tracking using a smartphone, a Leap Motion controller, and a computation unit. Following an experience prototyping procedure, 12 researchers used the prototype to help explore usability issues and define the design space. We identified issues in perception (moving to the object, reaching for the object), manipulation (successfully grabbing and orienting the object), and behavioral understanding (knowing how to use the smartphone as a viewport). To overcome these issues, we designed object-based feedback and accommodation mechanisms and studied their perceptual and behavioral effects via two tasks: picking up distant objects, and assembling a virtual house from blocks. Our mechanisms enabled significantly faster and more successful user interaction than the initial prototype in picking up and manipulating stationary and moving objects, with a lower cognitive load and greater user preference. The resulting system---Portal-ble---improves user intuition and aids free-hand interactions in mobile situations.
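For illustration, a minimal pinch-grab test of the kind such free-hand manipulation relies on might look like the sketch below; the distance thresholds, the is_grabbing helper, and the fingertip data layout are assumptions for this sketch, not the authors' implementation.

    # Hypothetical pinch-grab test: a grab fires when thumb and index tips
    # pinch together and the pinch happens close to the virtual object.
    # Thresholds are assumed values, not Portal-ble's.
    import math

    GRAB_PINCH_DIST = 0.03   # assumed: fingertips within 3 cm count as a pinch
    GRAB_REACH_DIST = 0.08   # assumed: pinch centre within 8 cm of the object

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def is_grabbing(thumb_tip, index_tip, object_center):
        """Return True when a pinch happens close enough to the virtual object."""
        pinch_center = [(t + i) / 2 for t, i in zip(thumb_tip, index_tip)]
        return (dist(thumb_tip, index_tip) < GRAB_PINCH_DIST and
                dist(pinch_center, object_center) < GRAB_REACH_DIST)

    # Example: fingertips 2 cm apart, pinch centre 5 cm from the object -> True
    print(is_grabbing([0.00, 0.0, 0.3], [0.02, 0.0, 0.3], [0.01, 0.05, 0.3]))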
{"title":"Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality","authors":"Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, J. Tompkin, J. Hughes, Jeff Huang","doi":"10.1145/3332165.3347904","DOIUrl":"https://doi.org/10.1145/3332165.3347904","url":null,"abstract":"Smartphone augmented reality (AR) lets users interact with physical and virtual spaces simultaneously. With 3D hand tracking, smartphones become apparatus to grab and move virtual objects directly. Based on design considerations for interaction, mobility, and object appearance and physics, we implemented a prototype for portable 3D hand tracking using a smartphone, a Leap Motion controller, and a computation unit. Following an experience prototyping procedure, 12 researchers used the prototype to help explore usability issues and define the design space. We identified issues in perception (moving to the object, reaching for the object), manipulation (successfully grabbing and orienting the object), and behavioral understanding (knowing how to use the smartphone as a viewport). To overcome these issues, we designed object-based feedback and accommodation mechanisms and studied their perceptual and behavioral effects via two tasks: picking up distant objects, and assembling a virtual house from blocks. Our mechanisms enabled significantly faster and more successful user interaction than the initial prototype in picking up and manipulating stationary and moving objects, with a lower cognitive load and greater user preference. The resulting system---Portal-ble---improves user intuition and aids free-hand interactions in mobile situations.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127627848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34
CircuitStyle
J. Davis, Jun Gong, Yunxin Sun, Parmit K. Chilana, Xing-Dong Yang
Instructors of hardware computing face many challenges including maintaining awareness of student progress, allocating their time adequately between lecturing and helping individual students, and keeping students engaged even while debugging problems. Based on formative interviews with 5 electronics instructors, we found that many circuit style behaviors could help novice users prevent or efficiently debug common problems. Drawing inspiration from the software engineering practice of coding style, these circuit style behaviors consist of best-practices and guidelines for implementing circuit prototypes that do not interfere with the functionality of the circuit, but help a circuit be more readable, less error-prone, and easier to debug. To examine if these circuit style behaviors could be peripherally enforced, aid an in-person instructor's ability to facilitate a workshop, and not monopolize instructor's attention, we developed CircuitStyle, a teaching aid for in-person hardware computing workshops. To evaluate the effectiveness of our tool, we deployed our system in an in-person maker-space workshop. The instructor appreciated CircuitStyle's ability to provide a broad understanding of the workshop's progress and the potential for our system to help instructors of various backgrounds better engage and understand the needs of their classroom.
{"title":"CircuitStyle","authors":"J. Davis, Jun Gong, Yunxin Sun, Parmit K. Chilana, Xing-Dong Yang","doi":"10.1145/3332165.3347920","DOIUrl":"https://doi.org/10.1145/3332165.3347920","url":null,"abstract":"Instructors of hardware computing face many challenges including maintaining awareness of student progress, allocating their time adequately between lecturing and helping individual students, and keeping students engaged even while debugging problems. Based on formative interviews with 5 electronics instructors, we found that many circuit style behaviors could help novice users prevent or efficiently debug common problems. Drawing inspiration from the software engineering practice of coding style, these circuit style behaviors consist of best-practices and guidelines for implementing circuit prototypes that do not interfere with the functionality of the circuit, but help a circuit be more readable, less error-prone, and easier to debug. To examine if these circuit style behaviors could be peripherally enforced, aid an in-person instructor's ability to facilitate a workshop, and not monopolize instructor's attention, we developed CircuitStyle, a teaching aid for in-person hardware computing workshops. To evaluate the effectiveness of our tool, we deployed our system in an in-person maker-space workshop. The instructor appreciated CircuitStyle's ability to provide a broad understanding of the workshop's progress and the potential for our system to help instructors of various backgrounds better engage and understand the needs of their classroom.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117174984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Gaze-Assisted Typing for Smart Glasses
Sunggeun Ahn, Geehyuk Lee
Text entry is expected to be a common task for smart glass users, which is generally performed using a touchpad on the temple or by a promising approach using eye tracking. However, each approach has its own limitations. For more efficient text entry, we present the concept of gaze-assisted typing (GAT), which uses both a touchpad and eye tracking. We initially examined GAT with a minimal eye input load, and demonstrated that the GAT technology was 51% faster than a two-step touch input typing method (i.e.,M-SwipeBoard: 5.85 words per minute (wpm) and GAT: 8.87 wpm). We also compared GAT methods with varying numbers of touch gestures. The results showed that a GAT requiring five different touch gestures was the most preferred, although all GAT techniques were equally efficient. Finally, we compared GAT with touch-only typing (SwipeZone) and eye-only typing (adjustable dwell) using an eye-trackable head-worn display. The results demonstrate that the most preferred technique, GAT, was 25.4% faster than the eye-only typing and 29.4% faster than the touch-only typing (GAT: 11.04 wpm, eye-only typing: 8.81 wpm, and touch-only typing: 8.53 wpm).
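The percentage differences quoted above are simple rate ratios over the reported words-per-minute figures; the quick check below reproduces them to within rounding of the published wpm values and uses only numbers stated in the abstract.

    # Relative speedup between two text-entry rates, in percent.
    def speedup_percent(faster_wpm, slower_wpm):
        return (faster_wpm / slower_wpm - 1.0) * 100.0

    print(round(speedup_percent(8.87, 5.85), 1))   # 51.6 (abstract: 51% faster than M-SwipeBoard)
    print(round(speedup_percent(11.04, 8.81), 1))  # 25.3 (abstract: 25.4% faster than eye-only)
    print(round(speedup_percent(11.04, 8.53), 1))  # 29.4 (abstract: 29.4% faster than touch-only)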
{"title":"Gaze-Assisted Typing for Smart Glasses","authors":"Sunggeun Ahn, Geehyuk Lee","doi":"10.1145/3332165.3347883","DOIUrl":"https://doi.org/10.1145/3332165.3347883","url":null,"abstract":"Text entry is expected to be a common task for smart glass users, which is generally performed using a touchpad on the temple or by a promising approach using eye tracking. However, each approach has its own limitations. For more efficient text entry, we present the concept of gaze-assisted typing (GAT), which uses both a touchpad and eye tracking. We initially examined GAT with a minimal eye input load, and demonstrated that the GAT technology was 51% faster than a two-step touch input typing method (i.e.,M-SwipeBoard: 5.85 words per minute (wpm) and GAT: 8.87 wpm). We also compared GAT methods with varying numbers of touch gestures. The results showed that a GAT requiring five different touch gestures was the most preferred, although all GAT techniques were equally efficient. Finally, we compared GAT with touch-only typing (SwipeZone) and eye-only typing (adjustable dwell) using an eye-trackable head-worn display. The results demonstrate that the most preferred technique, GAT, was 25.4% faster than the eye-only typing and 29.4% faster than the touch-only typing (GAT: 11.04 wpm, eye-only typing: 8.81 wpm, and touch-only typing: 8.53 wpm).","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128998561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Session details: Session 8B: Touch Input
Mayank Goel
{"title":"Session details: Session 8B: Touch Input","authors":"Mayank Goel","doi":"10.1145/3368384","DOIUrl":"https://doi.org/10.1145/3368384","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117301981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Plane, Ray, and Point: Enabling Precise Spatial Manipulations with Shape Constraints
Devamardeep Hayatpur, Seongkook Heo, Haijun Xia, W. Stuerzlinger, Daniel J. Wigdor
We present Plane, Ray, and Point, a set of interaction techniques that utilizes shape constraints to enable quick and precise object alignment and manipulation in virtual reality. Users create the three types of shape constraints, Plane, Ray, and Point, by using symbolic gestures. The shape constraints are used like scaffoldings and limit and guide the movement of virtual objects that collide or intersect with them. The same set of gestures can be performed with the other hand, which allow users to further control the degrees of freedom for precise and constrained manipulation. The combination of shape constraints and bimanual gestures yield a rich set of interaction techniques to support object transformation. An exploratory study conducted with 3D design experts and novice users found the techniques to be useful in 3D scene design workflows and easy to learn and use.
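As a rough illustration of how such shape constraints can limit a manipulated position, the sketch below projects a 3D point onto a plane, a ray, and a point using ordinary vector math; the function names and the hard-snap behaviour are assumptions for this sketch, not the authors' implementation.

    # Constraining a manipulated position to each of the three shapes.
    import numpy as np

    def constrain_to_plane(p, plane_point, plane_normal):
        n = plane_normal / np.linalg.norm(plane_normal)
        return p - np.dot(p - plane_point, n) * n          # drop the off-plane component

    def constrain_to_ray(p, origin, direction):
        d = direction / np.linalg.norm(direction)
        t = max(0.0, np.dot(p - origin, d))                # rays only extend forward
        return origin + t * d

    def constrain_to_point(p, anchor):
        return np.array(anchor, dtype=float)               # everything snaps to the anchor

    p = np.array([1.0, 2.0, 3.0])
    print(constrain_to_plane(p, np.array([0, 0, 0]), np.array([0, 0, 1])))  # [1. 2. 0.]
    print(constrain_to_ray(p, np.array([0, 0, 0]), np.array([1, 0, 0])))    # [1. 0. 0.]
    print(constrain_to_point(p, [0.5, 0.5, 0.5]))                           # [0.5 0.5 0.5]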
{"title":"Plane, Ray, and Point: Enabling Precise Spatial Manipulations with Shape Constraints","authors":"Devamardeep Hayatpur, Seongkook Heo, Haijun Xia, W. Stuerzlinger, Daniel J. Wigdor","doi":"10.1145/3332165.3347916","DOIUrl":"https://doi.org/10.1145/3332165.3347916","url":null,"abstract":"We present Plane, Ray, and Point, a set of interaction techniques that utilizes shape constraints to enable quick and precise object alignment and manipulation in virtual reality. Users create the three types of shape constraints, Plane, Ray, and Point, by using symbolic gestures. The shape constraints are used like scaffoldings and limit and guide the movement of virtual objects that collide or intersect with them. The same set of gestures can be performed with the other hand, which allow users to further control the degrees of freedom for precise and constrained manipulation. The combination of shape constraints and bimanual gestures yield a rich set of interaction techniques to support object transformation. An exploratory study conducted with 3D design experts and novice users found the techniques to be useful in 3D scene design workflows and easy to learn and use.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131571782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Optimizing Portrait Lighting at Capture-Time Using a 360 Camera as a Light Probe
Jane L. E, Ohad Fried, Maneesh Agrawala
We present a capture-time tool designed to help casual photographers orient their subject to achieve a user-specified target facial appearance. The inputs to our tool are an HDR environment map of the scene captured using a 360 camera, and a target facial appearance, selected from a gallery of common studio lighting styles. Our tool computes the optimal orientation for the subject to achieve the target lighting using a computationally efficient precomputed radiance transfer-based approach. It then tells the photographer how far to rotate about the subject. Optionally, our tool can suggest how to orient a secondary external light source (e.g. a phone screen) about the subject's face to further improve the match to the target lighting. We demonstrate the effectiveness of our approach in a variety of indoor and outdoor scenes using many different subjects to achieve a variety of looks. A user evaluation suggests that our tool reduces the mental effort required by photographers to produce well-lit portraits.
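A toy version of the capture-time search might discretize candidate subject orientations and keep the one whose shading best matches the target, as sketched below. The paper uses precomputed radiance transfer over a captured HDR environment map; the light list, target brightness, and Lambertian proxy here are assumptions for illustration only.

    # Toy orientation search: shade a proxy face normal against a few
    # environment lights for each candidate yaw and keep the best match.
    import math

    lights = [(20.0, 1.0), (160.0, 0.4), (270.0, 0.7)]   # assumed (azimuth deg, intensity)
    target_brightness = 1.0                              # assumed target facial brightness

    def face_brightness(yaw_deg):
        """Lambertian shading of a face turned to yaw_deg (horizontal lights only)."""
        total = 0.0
        for az, intensity in lights:
            total += intensity * max(0.0, math.cos(math.radians(az - yaw_deg)))
        return total

    best_yaw = min(range(0, 360, 5),
                   key=lambda y: abs(face_brightness(y) - target_brightness))
    print(best_yaw, round(face_brightness(best_yaw), 3))   # e.g. 20 1.0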
{"title":"Optimizing Portrait Lighting at Capture-Time Using a 360 Camera as a Light Probe","authors":"E. JaneL., Ohad Fried, Maneesh Agrawala","doi":"10.1145/3332165.3347893","DOIUrl":"https://doi.org/10.1145/3332165.3347893","url":null,"abstract":"We present a capture-time tool designed to help casual photographers orient their subject to achieve a user-specified target facial appearance. The inputs to our tool are an HDR environment map of the scene captured using a 360 camera, and a target facial appearance, selected from a gallery of common studio lighting styles. Our tool computes the optimal orientation for the subject to achieve the target lighting using a computationally efficient precomputed radiance transfer-based approach. It then tells the photographer how far to rotate about the subject. Optionally, our tool can suggest how to orient a secondary external light source (e.g. a phone screen) about the subject's face to further improve the match to the target lighting. We demonstrate the effectiveness of our approach in a variety of indoor and outdoor scenes using many different subjects to achieve a variety of looks. A user evaluation suggests that our tool reduces the mental effort required by photographers to produce well-lit portraits.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130336433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
LightAnchors
Karan Ahuja, Sujeath Pareddy, R. Xiao, Mayank Goel, Chris Harrison
Augmented reality requires precise and instant overlay of digital information onto everyday objects. We present our work on LightAnchors, a new method for displaying spatially-anchored data. We take advantage of pervasive point lights - such as LEDs and light bulbs - for both in-view anchoring and data transmission. These lights are blinked at high speed to encode data. We built a proof-of-concept ap-plication that runs on iOS without any hardware or software modifications. We also ran a study to characterize the performance of LightAnchors and built eleven example demos to highlight the potential of our approach.
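A minimal sketch of how data can be recovered from a blinking point light follows: threshold per-frame intensity samples into bits, locate a synchronization preamble, then read a fixed-length payload. The preamble pattern, payload length, and midpoint thresholding are assumptions for this sketch, not the LightAnchors protocol.

    # Decode a payload from per-frame brightness samples of one point light.
    PREAMBLE = [1, 0, 1, 0, 1, 1]   # assumed synchronization pattern
    PAYLOAD_BITS = 8                # assumed payload length

    def decode(intensity_samples):
        threshold = (max(intensity_samples) + min(intensity_samples)) / 2.0
        bits = [1 if s > threshold else 0 for s in intensity_samples]
        for i in range(len(bits) - len(PREAMBLE) - PAYLOAD_BITS + 1):
            if bits[i:i + len(PREAMBLE)] == PREAMBLE:
                payload = bits[i + len(PREAMBLE): i + len(PREAMBLE) + PAYLOAD_BITS]
                return int("".join(map(str, payload)), 2)
        return None

    # Example: preamble followed by the byte 0b01000010 (66), as raw brightness
    samples = [9, 1, 9, 1, 9, 9, 1, 9, 1, 1, 1, 1, 9, 1]
    print(decode(samples))   # 66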
{"title":"LightAnchors","authors":"Karan Ahuja, Sujeath Pareddy, R. Xiao, Mayank Goel, Chris Harrison","doi":"10.1145/3332165.3347884","DOIUrl":"https://doi.org/10.1145/3332165.3347884","url":null,"abstract":"Augmented reality requires precise and instant overlay of digital information onto everyday objects. We present our work on LightAnchors, a new method for displaying spatially-anchored data. We take advantage of pervasive point lights - such as LEDs and light bulbs - for both in-view anchoring and data transmission. These lights are blinked at high speed to encode data. We built a proof-of-concept ap-plication that runs on iOS without any hardware or software modifications. We also ran a study to characterize the performance of LightAnchors and built eleven example demos to highlight the potential of our approach.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"60 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114015846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
Self-healing UI: Mechanically and Electrically Self-healing Materials for Sensing and Actuation Interfaces
Koya Narumi, Fang Qin, Siyuan Liu, Huai-Yu Cheng, Jianzhe Gu, Y. Kawahara, Mohammad Islam, Lining Yao
Living things in nature have long been utilizing the ability to "heal" their wounds on the soft bodies to survive in the outer environment. In order to impart this self-healing property to our daily life interface, we propose Self-healing UI, a soft-bodied interface that can intrinsically self-heal damages without external stimuli or glue. The key material to achieving Self-healing UI is MWCNTs-PBS, a composite material of a self-healing polymer polyborosiloxane (PBS) and a filler material multi-walled carbon nanotubes (MWCNTs), which retains mechanical and electrical self-healability. We developed a hybrid model that combines PBS, MWCNTs-PBS, and other common soft materials including fabric and silicone to build interface devices with self-healing, sensing, and actuation capability. These devices were implemented by layer-by-layer stacking fabrication without glue or any post-processing, by leveraging the materials' inherent self-healing property between two layers. We then demonstrated sensing primitives and interactive applications that extend the design space of shape-changing interfaces with their ability to transform, conform, reconfigure, heal, and fuse, which we believe can enrich the toolbox of human-computer interaction (HCI).
{"title":"Self-healing UI: Mechanically and Electrically Self-healing Materials for Sensing and Actuation Interfaces","authors":"Koya Narumi, Fang Qin, Siyuan Liu, Huai-Yu Cheng, Jianzhe Gu, Y. Kawahara, Mohammad Islam, Lining Yao","doi":"10.1145/3332165.3347901","DOIUrl":"https://doi.org/10.1145/3332165.3347901","url":null,"abstract":"Living things in nature have long been utilizing the ability to \"heal\" their wounds on the soft bodies to survive in the outer environment. In order to impart this self-healing property to our daily life interface, we propose Self-healing UI, a soft-bodied interface that can intrinsically self-heal damages without external stimuli or glue. The key material to achieving Self-healing UI is MWCNTs-PBS, a composite material of a self-healing polymer polyborosiloxane (PBS) and a filler material multi-walled carbon nanotubes (MWCNTs), which retains mechanical and electrical self-healability. We developed a hybrid model that combines PBS, MWCNTs-PBS, and other common soft materials including fabric and silicone to build interface devices with self-healing, sensing, and actuation capability. These devices were implemented by layer-by-layer stacking fabrication without glue or any post-processing, by leveraging the materials' inherent self-healing property between two layers. We then demonstrated sensing primitives and interactive applications that extend the design space of shape-changing interfaces with their ability to transform, conform, reconfigure, heal, and fuse, which we believe can enrich the toolbox of human-computer interaction (HCI).","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125770895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
MeCap: Whole-Body Digitization for Low-Cost VR/AR Headsets
Karan Ahuja, Chris Harrison, Mayank Goel, R. Xiao
Low-cost, smartphone-powered VR/AR headsets are becoming more popular. These basic devices - little more than plastic or cardboard shells - lack advanced features, such as controllers for the hands, limiting their interactive capability. Moreover, even high-end consumer headsets lack the ability to track the body and face. For this reason, interactive experiences like social VR are underdeveloped. We introduce MeCap, which enables commodity VR headsets to be augmented with powerful motion capture ("MoCap") and user-sensing capabilities at very low cost (under $5). Using only a pair of hemi-spherical mirrors and the existing rear-facing camera of a smartphone, MeCap provides real-time estimates of a wearer's 3D body pose, hand pose, facial expression, physical appearance and surrounding environment - capabilities which are either absent in contemporary VR/AR systems or which require specialized hardware and controllers. We evaluate the accuracy of each of our tracking features, the results of which show imminent feasibility.
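One step such a mirror-based pipeline plausibly needs is unwrapping the circular mirror region of each camera frame into a rectangular strip before any pose or face model is applied. The sketch below uses a simple linear polar unwrap and ignores the real mirror geometry, which would require calibration; it is an illustration under those assumptions, not MeCap's implementation.

    # Polar unwrap of a circular mirror region centred at (cx, cy).
    import numpy as np

    def unwrap_mirror(image, cx, cy, r_max, out_h=64, out_w=256):
        out = np.zeros((out_h, out_w, image.shape[2]), dtype=image.dtype)
        for row in range(out_h):
            r = r_max * (row + 0.5) / out_h              # assumed linear radius mapping
            for col in range(out_w):
                theta = 2.0 * np.pi * col / out_w
                x = int(round(cx + r * np.cos(theta)))
                y = int(round(cy + r * np.sin(theta)))
                if 0 <= x < image.shape[1] and 0 <= y < image.shape[0]:
                    out[row, col] = image[y, x]
        return out

    frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # stand-in camera frame
    strip = unwrap_mirror(frame, cx=320, cy=240, r_max=200)
    print(strip.shape)   # (64, 256, 3)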
{"title":"MeCap: Whole-Body Digitization for Low-Cost VR/AR Headsets","authors":"Karan Ahuja, Chris Harrison, Mayank Goel, R. Xiao","doi":"10.1145/3332165.3347889","DOIUrl":"https://doi.org/10.1145/3332165.3347889","url":null,"abstract":"Low-cost, smartphone-powered VR/AR headsets are becoming more popular. These basic devices - little more than plastic or cardboard shells - lack advanced features, such as controllers for the hands, limiting their interactive capability. Moreover, even high-end consumer headsets lack the ability to track the body and face. For this reason, interactive experiences like social VR are underdeveloped. We introduce MeCap, which enables commodity VR headsets to be augmented with powerful motion capture (\"MoCap\") and user-sensing capabilities at very low cost (under $5). Using only a pair of hemi-spherical mirrors and the existing rear-facing camera of a smartphone, MeCap provides real-time estimates of a wearer's 3D body pose, hand pose, facial expression, physical appearance and surrounding environment - capabilities which are either absent in contemporary VR/AR systems or which require specialized hardware and controllers. We evaluate the accuracy of each of our tracking features, the results of which show imminent feasibility.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130276305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
Third-Person Piloting: Increasing Situational Awareness using a Spatially Coupled Second Drone
Ryotaro Temma, Kazuki Takashima, Kazuyuki Fujita, Koh Sueda, Y. Kitamura
We propose Third-Person Piloting, a novel drone manipulation interface that increases situational awareness using an interactive third-person perspective from a second, spatially coupled drone. The pilot uses a controller with a manipulatable miniature drone. Our algorithm understands the relationship between the pilot's eye position and the miniature drone and ensures that the same spatial relationship is maintained between the two real drones in the sky. This allows the pilot to obtain various third-person perspectives by changing the orientation of the miniature drone while maintaining standard primary drone control using the conventional controller. We design and implement a working prototype with programmable drones and propose several representative operation scenarios. We gather user feedback to obtain the initial insights of our interface design from novices, advanced beginners, and experts. Our result suggests that the interactive third-person perspective provided by the second drone offers sufficient potential for increasing situational awareness and supporting their primary drone operations.
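The spatial coupling described above can be illustrated with a few lines of vector math: the second drone is placed so that it views the primary drone the way the pilot's eye views the miniature, with the offset scaled up to sky distances. The uniform scale factor, the shared world frame, and the second_drone_position helper below are assumptions for this sketch.

    # Place the camera drone so its view of the primary drone mirrors the
    # pilot's view of the miniature (offset scaled up).
    import numpy as np

    SCALE = 20.0   # assumed: 1 m at the controller maps to 20 m in the sky

    def second_drone_position(eye_pos, miniature_pos, primary_pos):
        offset = np.asarray(miniature_pos) - np.asarray(eye_pos)
        return np.asarray(primary_pos) - SCALE * offset

    eye = [0.0, 0.0, 1.6]          # pilot's eye, metres
    miniature = [0.3, 0.0, 1.5]    # miniature drone 30 cm ahead, 10 cm below eye level
    primary = [5.0, 10.0, 3.0]     # primary drone in the world
    print(second_drone_position(eye, miniature, primary))   # [-1. 10.  5.]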
{"title":"Third-Person Piloting: Increasing Situational Awareness using a Spatially Coupled Second Drone","authors":"Ryotaro Temma, Kazuki Takashima, Kazuyuki Fujita, Koh Sueda, Y. Kitamura","doi":"10.1145/3332165.3347953","DOIUrl":"https://doi.org/10.1145/3332165.3347953","url":null,"abstract":"We propose Third-Person Piloting, a novel drone manipulation interface that increases situational awareness using an interactive third-person perspective from a second, spatially coupled drone. The pilot uses a controller with a manipulatable miniature drone. Our algorithm understands the relationship between the pilot's eye position and the miniature drone and ensures that the same spatial relationship is maintained between the two real drones in the sky. This allows the pilot to obtain various third-person perspectives by changing the orientation of the miniature drone while maintaining standard primary drone control using the conventional controller. We design and implement a working prototype with programmable drones and propose several representative operation scenarios. We gather user feedback to obtain the initial insights of our interface design from novices, advanced beginners, and experts. Our result suggests that the interactive third-person perspective provided by the second drone offers sufficient potential for increasing situational awareness and supporting their primary drone operations.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130345560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15