
Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology: Latest Publications

Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology
C. Latulipe, Bjoern Hartmann, Tovi Grossman
We are very excited to welcome you to the 28th Annual ACM Symposium on User Interface Software and Technology (UIST), held from November 8-11th 2015, in Charlotte, North Carolina, USA. UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from diverse areas including graphical & web user interfaces, tangible & ubiquitous computing, virtual & augmented reality, multimedia, new input & output devices, fabrication, wearable computing and CSCW. UIST 2015 received 297 technical paper submissions. After a thorough review process, the 39-member program committee accepted 70 papers (23.6%). Each anonymous submission that entered the full review process was first reviewed by three external reviewers, and a meta-review was provided by a program committee member. If, after these four reviews, the submission was deemed to pass a rebuttal threshold, we asked the authors to submit a short rebuttal addressing the reviewers' concerns. A second member of the program committee was then asked to examine the paper, rebuttal, and reviews, and to provide their own meta-review. The program committee met in person in Berkeley, California, USA on June 25th and 26th, 2015, to select which papers to invite for the program. Submissions were accepted only after the authors provided a final revision addressing the committee's comments. In addition to papers, our program includes two papers from the ACM Transactions on Computer-Human Interaction journal (TOCHI), as well as 22 posters, 45 demonstrations, and 8 student presentations in the eleventh annual Doctoral Symposium. Our program also features the seventh annual Student Innovation Contest. Teams from all over the world will compete in this year's contest, which focuses on blurring the lines between art and engineering and creating tools for robotic storytelling. UIST 2015 will feature two keynote presentations. The opening keynote will be given by Ramesh Raskar (MIT Media Lab) on extreme computational imaging. Blaise Aguera Y Arcas from Google will deliver the closing keynote on machine intelligence. We welcome you to Charlotte, a city full of southern hospitality. We hope that you will find the technical program interesting and thought-provoking. We also hope that UIST 2015 will provide you with enjoyable opportunities to engage with fellow researchers from both industry and academia, from institutions around the world.
Citations: 6
FlexiBend: Enabling Interactivity of Multi-Part, Deformable Fabrications Using Single Shape-Sensing Strip
Chin-yu Chien, Rong-Hao Liang, Long-Fei Lin, Liwei Chan, Bing-Yu Chen
This paper presents FlexiBend, an easily installable shape-sensing strip that enables interactivity of multi-part, deformable fabrications. The flexible sensor strip is composed of a dense linear array of strain gauges and therefore has shape-sensing capability. After installation, FlexiBend can simultaneously sense user inputs in different parts of a fabrication or even capture the geometry of a deformable fabrication.
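The abstract does not give implementation details, but the shape-sensing idea can be illustrated with a minimal sketch: assuming each gauge reports the local curvature of one strip segment, the strip's 2D shape follows from integrating the heading angle along its length. Function and parameter names below are hypothetical, not from the paper.

```python
import math

def reconstruct_strip_shape(curvatures, segment_length=0.01):
    """Reconstruct a 2D polyline for a bent strip from per-segment curvature
    readings (1/m). Each gauge is assumed to report the local curvature of one
    segment of known length; heading is integrated along the strip."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for kappa in curvatures:
        heading += kappa * segment_length        # turn by curvature * arc length
        x += segment_length * math.cos(heading)  # advance along the new heading
        y += segment_length * math.sin(heading)
        points.append((x, y))
    return points

# Example: a 10 cm strip bent into a quarter circle, read out over 10 segments
quarter_circle = reconstruct_strip_shape([(math.pi / 2) / 0.1] * 10)
print(quarter_circle[-1])   # endpoint of the reconstructed arc
```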
Citations: 32
Explaining Visual Changes in Web Interfaces
Brian Burg, Amy J. Ko, Michael D. Ernst
Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior. The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.
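Scry itself is built into the browser's rendering pipeline; its core diffing step, comparing two captured element states, can be sketched abstractly as below. The state layout and function name are illustrative assumptions, not Scry's actual data structures.

```python
def diff_element_states(before, after):
    """Compare two captured element states, each a dict like
    {"dom": {...attributes...}, "css": {...computed properties...}},
    and return the properties whose values changed, appeared, or disappeared."""
    changes = {}
    for layer in ("dom", "css"):
        old, new = before.get(layer, {}), after.get(layer, {})
        keys = set(old) | set(new)
        delta = {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}
        if delta:
            changes[layer] = delta
    return changes

# Example: a menu element captured before and after it expands
state_collapsed = {"dom": {"class": "menu"}, "css": {"height": "0px", "opacity": "0"}}
state_expanded = {"dom": {"class": "menu open"}, "css": {"height": "240px", "opacity": "1"}}
print(diff_element_states(state_collapsed, state_expanded))
```

In the tool described by the paper, differences like these are then traced back to the JavaScript statements that wrote the changed attributes and styles.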
Citations: 43
Virtual Replicas for Remote Assistance in Virtual and Augmented Reality
Ohan Oda, Carmine Elvezio, Mengu Sukan, Steven K. Feiner, B. Tversky
In many complex tasks, a remote subject-matter expert may need to assist a local user to guide actions on objects in the local user's environment. However, effective spatial referencing and action demonstration in a remote physical environment can be challenging. We introduce two approaches that use Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display. Both approaches allow the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. This can be especially useful for parts that are occluded or difficult to access. In one approach, the expert points in 3D to portions of virtual replicas to annotate them. In another approach, the expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains, comparing both approaches to an approach in which the expert uses a 2D tablet-based drawing system similar to ones developed for prior work on remote assistance. The study showed the 3D demonstration approach to be faster than the others. In addition, the 3D pointing approach was faster than the 2D tablet in the case of a highly trained expert.
Citations: 146
Printem: Instant Printed Circuit Boards with Standard Office Printers & Inks
Perumal Varun Chadalavada, Daniel J. Wigdor
Printem film, a novel method for the fabrication of Printed Circuit Boards (PCBs) for small batch/prototyping use, is presented. Printem film enables a standard office inkjet or laser printer, using standard inks, to produce a PCB: the user prints a negative of the PCB onto the film, exposes it to UV or sunlight, and then tears away the unneeded portion of the film, leaving behind a copper PCB. PCBs produced with Printem film are as conductive as PCBs created using standard industrial methods. Herein, the composition of Printem film is described, and advantages of various materials discussed. Sample applications are also described, each of which demonstrates some unique advantage of Printem film over current prototyping methods: conductivity, flexibility, the ability to be cut with a pair of scissors, and the ability to be mounted to a rigid backplane. NOTE: publication of full-text held until November 9, 2015.
Citations: 30
MoveableMaker: Facilitating the Design, Generation, and Assembly of Moveable Papercraft
M. Annett, Tovi Grossman, Daniel J. Wigdor, G. Fitzmaurice
In this work, we explore moveables, i.e., interactive papercraft that harness user interaction to generate visual effects. First, we present a survey of children's books that captured the state of the art of moveables. The results of this survey were synthesized into a moveable taxonomy and informed MoveableMaker, a new tool to assist users in designing, generating, and assembling moveable papercraft. MoveableMaker supports the creation and customization of a number of moveable effects and employs moveable-specific features including animated tooltips, automatic instruction generation, constraint-based rendering, techniques to reduce material waste, and so on. To understand how MoveableMaker encourages creativity and enhances the workflow when creating moveables, a series of exploratory workshops were conducted. The results of these explorations, including the content participants created and their impressions, are discussed, along with avenues for future research involving moveables.
Citations: 13
Sensing Tablet Grasp + Micro-mobility for Active Reading
Dongwook Yoon, K. Hinckley, Hrvoje Benko, François Guimbretière, Pourang Irani, M. Pahud, M. Gavriliu
The orientation and repositioning of physical artefacts (such as paper documents) to afford shared viewing of content, or to steer the attention of others to specific details, is known as micro-mobility. But the role of grasp in micro-mobility has rarely been considered, much less sensed by devices. We therefore employ capacitive grip sensing and inertial motion to explore the design space of combined grasp + micro-mobility by considering three classes of technique in the context of active reading. Single user, single device techniques support grip-influenced behaviors such as bookmarking a page with a finger, but combine this with physical embodiment to allow flipping back to a previous location. Multiple user, single device techniques, such as passing a tablet to another user or working side-by-side on a single device, add fresh nuances of expression to co-located collaboration. And single user, multiple device techniques afford facile cross-referencing of content across devices. Founded on observations of grasp and micro-mobility, these techniques open up new possibilities for both individual and collaborative interaction with electronic documents.
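As a rough illustration of the sensing side only, the toy heuristic below infers which bezel edges are being gripped from a normalized capacitance map; the paper's actual grip sensing and its fusion with inertial motion are more sophisticated, and all names here are hypothetical.

```python
import numpy as np

def detect_grip_edges(cap_map, threshold=0.5):
    """Given a 2D array of normalized capacitance values from sensors around the
    tablet's case, report which edges show sustained contact. A toy heuristic,
    not the classifier used in the paper."""
    edges = {
        "left": cap_map[:, 0],
        "right": cap_map[:, -1],
        "top": cap_map[0, :],
        "bottom": cap_map[-1, :],
    }
    # An edge counts as gripped if a substantial fraction of its sensors exceed the threshold.
    return [name for name, strip in edges.items() if np.mean(strip > threshold) > 0.3]

# Example: strong contact along the left edge suggests a left-hand grip
cap = np.zeros((8, 6))
cap[:, 0] = 0.9
print(detect_grip_edges(cap))   # ['left']
```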
Citations: 29
Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency
Yusuke Sugano, A. Bulling
Head-mounted eye tracking has significant potential for gaze-based applications such as life logging, mental health monitoring, or the quantified self. A neglected challenge for the long-term recordings required by these applications is that drift in the initial person-specific eye tracker calibration, for example caused by physical activity, can severely impact gaze estimation accuracy and thus system performance and user experience. We first analyse calibration drift on a new dataset of natural gaze data recorded using synchronised video-based and Electrooculography-based eye trackers of 20 users performing everyday activities in a mobile setting. Based on this analysis we present a method to automatically self-calibrate head-mounted eye trackers based on a computational model of bottom-up visual saliency. Through evaluations on the dataset we show that our method 1) is effective in reducing calibration drift in calibrated eye trackers and 2) given sufficient data, can achieve gaze estimation accuracy competitive with that of a calibrated eye tracker, without any manual calibration.
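A much-simplified, translation-only version of the idea can be sketched as follows: treat the most salient scene location at each instant as the likely fixation target and estimate a constant drift offset from the residuals. The paper's model is considerably more elaborate; the names and parameters below are illustrative.

```python
import numpy as np

def estimate_drift_offset(gaze_points, saliency_maxima):
    """Estimate a constant 2D calibration drift as the median displacement between
    estimated gaze points and the most salient scene location at the same instant.
    Assumes the user mostly fixates salient regions; a simplified, translation-only
    stand-in for the paper's saliency-based model."""
    gaze = np.asarray(gaze_points, dtype=float)
    salient = np.asarray(saliency_maxima, dtype=float)
    return np.median(salient - gaze, axis=0)  # median is robust to off-saliency fixations

def correct_gaze(gaze_points, offset):
    """Apply the estimated drift offset to new gaze estimates."""
    return np.asarray(gaze_points, dtype=float) + offset

# Example: the tracker reads about 15 px left and 8 px above the salient targets
offset = estimate_drift_offset([(100, 200), (310, 95)], [(115, 208), (326, 102)])
print(correct_gaze([(400, 300)], offset))
```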
Citations: 80
Anger-based BCI Using fNIRS Neurofeedback
Gabor Aranyi, Fred Charles, M. Cavazza
Functional near-infrared spectroscopy (fNIRS) holds increasing potential for Brain-Computer Interfaces (BCI) due to its portability, ease of application, robustness to movement artifacts, and relatively low cost. The use of fNIRS to support the development of affective BCI has received comparatively less attention, despite the role played by the prefrontal cortex in affective control, and the appropriateness of fNIRS to measure prefrontal activity. We present an active, fNIRS-based neurofeedback (NF) interface, which uses differential changes in oxygenation between the left and right sides of the dorsolateral prefrontal cortex to operationalize BCI input. The system is activated by users generating a state of anger, which has been previously linked to increased left prefrontal asymmetry. We have incorporated this NF interface into an experimental platform adapted from a virtual 3D narrative, in which users can express anger at a virtual character perceived as evil, causing the character to disappear progressively. Eleven subjects used the system and were able to successfully perform NF despite minimal training. Extensive analysis confirms that success was associated with the intent to express anger. This has positive implications for the design of affective BCI based on prefrontal asymmetry.
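The left-right asymmetry signal that drives such an interface can be sketched roughly as below: baseline-corrected oxygenation change on each side, their difference, and a clipped mapping to a feedback level. Thresholds, scaling, and names are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def asymmetry_score(left_hbo, right_hbo, baseline_left, baseline_right):
    """Return a left-minus-right oxygenation asymmetry score, with each side
    expressed as a change from its own resting baseline. Positive values indicate
    relatively greater left-prefrontal activation."""
    d_left = np.mean(left_hbo) - baseline_left
    d_right = np.mean(right_hbo) - baseline_right
    return d_left - d_right

def feedback_level(score, max_score=1.0):
    """Map the asymmetry score to a 0..1 neurofeedback level (illustrative scaling)."""
    return float(np.clip(score / max_score, 0.0, 1.0))

# Example: a window of samples with elevated left-side oxygenation
level = feedback_level(asymmetry_score([0.80, 0.90, 0.85], [0.30, 0.32, 0.31], 0.4, 0.3))
print(level)   # a higher level would make the virtual character fade further
```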
Citations: 23
Protopiper: Physically Sketching Room-Sized Objects at Actual Scale
Harshit Agrawal, Udayan Umapathi, Róbert Kovács, Johannes Frohnhofen, Hsiang-Ting Chen, Stefanie Müller, Patrick Baudisch
Physical sketching of 3D wireframe models, using a hand-held plastic extruder, allows users to explore the design space of 3D models efficiently. Unfortunately, the scale of these devices limits users' design explorations to small-scale objects. We present protopiper, a computer aided, hand-held fabrication device, that allows users to sketch room-sized objects at actual scale. The key idea behind protopiper is that it forms adhesive tape into tubes as its main building material, rather than extruded plastic or photopolymer lines. Since the resulting tubes are hollow they offer excellent strength-to-weight ratio, thus scale well to large structures. Since the tape is pre-coated with adhesive it allows connecting tubes quickly, unlike extruded plastic that would require heating and cooling in the kilowatt range. We demonstrate protopiper's use through several demo objects, ranging from more constructive objects, such as furniture, to more decorative objects, such as statues. In our exploratory user study, 16 participants created objects based on their own ideas. They rated the device as being "useful for creative exploration", "its ability to sketch at actual scale helped judge fit", and "fun to use."
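The strength-to-weight claim can be made concrete with standard thin-walled beam formulas (an illustration, not a calculation from the paper). For a tube of radius $r$ and wall thickness $t$ made of material with density $\rho$,

\[
I_{\text{tube}} \approx \pi r^{3} t, \qquad m = 2\pi r t\,\rho ,
\]

while a solid rod of equal mass per length has radius $R$ with $\pi R^{2} = 2\pi r t$, hence

\[
I_{\text{solid}} = \frac{\pi R^{4}}{4} = \pi r^{2} t^{2}, \qquad \frac{I_{\text{tube}}}{I_{\text{solid}}} = \frac{r}{t} .
\]

So a tape tube is roughly $r/t$ times stiffer in bending than a solid strand of the same weight; with tape-thin walls that factor is on the order of 100, which is why hollow tubes scale well to room-sized structures.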
Citations: 54