
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology: Latest Publications

AttachMate: highlight extraction from email attachments
J. Hailpern, S. Asur, Kyle Rector
While email is a major conduit for information sharing in enterprise, there has been little work on exploring the files sent along with these messages -- attachments. These accompanying documents can be large (multiple megabytes), lengthy (multiple pages), and not optimized for the smaller screen sizes, limited reading time, and expensive bandwidth of mobile users. Thus, attachments can increase data storage costs (for both end users and email servers), drain users' time when irrelevant, cause important information to be missed when ignored, and pose a serious access issue for mobile users. To address these problems we created AttachMate, a novel email attachment summarization system. AttachMate can summarize the content of email attachments and automatically insert the summary into the text of the email. AttachMate also stores all files in the cloud, reducing file storage costs and bandwidth consumption. In this paper, the primary contribution is the AttachMate client/server architecture. To ground, support and validate the AttachMate system we present two upfront studies (813 participants) to understand the state and limitations of attachments, a novel algorithm to extract representative concept sentences (tested through two validation studies), and a user study of AttachMate within an enterprise.
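The abstract does not spell out the concept-sentence extraction algorithm. As a rough illustration of extractive highlighting in general, here is a minimal frequency-based sentence scorer; the scoring scheme and the `extract_highlights` name are assumptions for illustration, not the paper's method:

```python
import re
from collections import Counter

def extract_highlights(text, k=2):
    """Score sentences by the average frequency of the words they
    contain and return the top-k, in original document order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return [s for s in sentences if s in top]
```

A summary produced this way could then be inserted into the email body in place of the full attachment, as the system described above does.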
DOI: 10.1145/2642918.2647419
Citations: 6
Air+touch: interweaving touch & in-air gestures
Xiang 'Anthony' Chen, Julia Schwarz, Chris Harrison, Jennifer Mankoff, S. Hudson
We present Air+Touch, a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than each input modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, do a finger 'high jump' between two touches to select a region of text, or drag and in-air 'pigtail' to copy text to the clipboard. Through an observational study, we devised a basic taxonomy of Air+Touch interactions, based on whether the in-air component occurs before, between or after touches. To illustrate the potential of our approach, we built four applications that showcase seven exemplar Air+Touch interactions we created.
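The before/between/after taxonomy can be sketched as a timestamp comparison between an in-air gesture interval and the surrounding touch events. The function name and interval representation below are illustrative assumptions:

```python
def classify_air_touch(air_interval, touch_times):
    """Classify an in-air gesture relative to touch events:
    'before' a touch, 'after' a touch, or 'between' two touches."""
    start, end = air_interval
    prior = [t for t in touch_times if t <= start]
    later = [t for t in touch_times if t >= end]
    if prior and later:
        return 'between'
    if later:
        return 'before'
    if prior:
        return 'after'
    return 'unsegmented'  # no touch available to segment the gesture
```

In the paper's terms, the touches are what segment the in-air motion, which is why a gesture with no adjacent touch cannot be classified.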
DOI: 10.1145/2642918.2647392
Citations: 120
Declarative interaction design for data visualization
Arvind Satyanarayan, Kanit Wongsuphasawat, Jeffrey Heer
Declarative visualization grammars can accelerate development, facilitate retargeting across platforms, and allow language-level optimizations. However, existing declarative visualization languages are primarily concerned with visual encoding, and rely on imperative event handlers for interactive behaviors. In response, we introduce a model of declarative interaction design for data visualizations. Adopting methods from reactive programming, we model low-level events as composable data streams from which we form higher-level semantic signals. Signals feed predicates and scale inversions, which allow us to generalize interactive selections at the level of item geometry (pixels) into interactive queries over the data domain. Production rules then use these queries to manipulate the visualization's appearance. To facilitate reuse and sharing, these constructs can be encapsulated as named interactors: standalone, purely declarative specifications of interaction techniques. We assess our model's feasibility and expressivity by instantiating it with extensions to the Vega visualization grammar. Through a diverse range of examples, we demonstrate coverage over an established taxonomy of visualization interaction techniques.
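A minimal sketch of the signal idea, assuming a toy `Signal` class rather than Vega's actual runtime: a low-level event stream updates a signal, and a derived signal applies a scale inversion from pixel space back to the data domain:

```python
class Signal:
    """A named reactive value that notifies subscribers on change."""
    def __init__(self, name, value=None):
        self.name, self.value, self._subs = name, value, []

    def subscribe(self, fn):
        self._subs.append(fn)

    def update(self, value):
        self.value = value
        for fn in self._subs:
            fn(value)

def derive(name, upstream, fn):
    """Form a higher-level signal by transforming an upstream one."""
    initial = fn(upstream.value) if upstream.value is not None else None
    out = Signal(name, initial)
    upstream.subscribe(lambda v: out.update(fn(v)))
    return out

# Invert a linear scale: pixels [0, 400] map to data domain [0, 100],
# so an interactive selection in pixels becomes a query over the data.
pixel_x = Signal('pixel_x', 0)
data_x = derive('data_x', pixel_x, lambda p: p / 400 * 100)
```

Updating `pixel_x` (e.g. from a mousemove stream) propagates through `data_x`, mirroring how signals feed predicates and scale inversions in the model described above.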
DOI: 10.1145/2642918.2647360
Citations: 106
Paper3D: bringing casual 3D modeling to a multi-touch interface
P. Paczkowski, Julie Dorsey, H. Rushmeier, Min H. Kim
A 3D modeling system that provides all-inclusive functionality is generally too demanding for a casual 3D modeler to learn. In recent years, there has been a shift towards developing more approachable systems, with easy-to-learn, intuitive interfaces. However, most modeling systems still employ mouse and keyboard interfaces, despite the ubiquity of tablet devices, and the benefits of multi-touch interfaces applied to 3D modeling. In this paper, we introduce an alternative 3D modeling paradigm for creating developable surfaces, inspired by traditional papercrafting, and implemented as a system designed from the start for a multi-touch tablet. We demonstrate the process of assembling complex 3D scenes from a collection of simpler models, in turn shaped through operations applied to sheets of virtual paper. The modeling and assembling operations mimic familiar, real-world operations performed on paper, allowing users to quickly learn our system with very little guidance. We outline key design decisions made throughout the development process, based on feedback obtained through collaboration with target users. Finally, we include a range of models created in our system.
DOI: 10.1145/2642918.2647416
Citations: 24
Vibkinesis: notification by direct tap and 'dying message' using vibronic movement controllable smartphones
Shota Yamanaka, Homei Miyashita
We propose Vibkinesis, a smartphone that can control its angle and directions of movement and rotation. By separately controlling the vibration motors attached to it, the smartphone can move on a table in the direction it chooses. Vibkinesis can inform a user of a message received when the user is away from the smartphone by changing its orientation, e.g., the smartphone has rotated 90° to the left before the user returns to the smartphone. With this capability, Vibkinesis can notify the user of a message even if the battery is discharged. We also extend the sensing area of Vibkinesis by using an omni-directional lens so that the smartphone tracks the surrounding objects. This allows Vibkinesis to tap the user's hand. These novel interactions expand the mobile device's movement area, notification channels, and notification time span.
DOI: 10.1145/2642918.2647365
Citations: 8
Laevo: a temporal desktop interface for integrated knowledge work
S. Jeuris, Steven Houben, J. Bardram
Prior studies show that knowledge work is characterized by highly interlinked practices, including task, file and window management. However, existing personal information management tools primarily focus on a limited subset of knowledge work, forcing users to perform additional manual configuration work to integrate the different tools they use. In order to understand tool usage, we review literature on how users' activities are created and evolve over time as part of knowledge worker practices. From this we derive the activity life cycle, a conceptual framework describing the different states and transitions of an activity. The life cycle is used to inform the design of Laevo, a temporal activity-centric desktop interface for personal knowledge work. Laevo allows users to structure work within dedicated workspaces, managed on a timeline. Through a centralized notification system which doubles as a to-do list, incoming interruptions can be handled. Our field study indicates how highlighting the temporal nature of activities results in lightweight scalable activity management, while making users more aware about their ongoing and planned work.
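An activity life cycle of this kind can be modeled as a small state machine. The states and transition names below are hypothetical stand-ins to illustrate the shape of such a framework, not the paper's actual life cycle:

```python
# Hypothetical life-cycle states and transitions (illustration only).
TRANSITIONS = {
    'created':   {'activate': 'active'},
    'active':    {'suspend': 'suspended', 'complete': 'closed'},
    'suspended': {'activate': 'active', 'complete': 'closed'},
    'closed':    {'reopen': 'active'},
}

class Activity:
    """Minimal state machine tracking one activity's life cycle."""
    def __init__(self, name):
        self.name, self.state = name, 'created'

    def fire(self, event):
        try:
            self.state = TRANSITIONS[self.state][event]
        except KeyError:
            raise ValueError(
                f"invalid transition {event!r} from state {self.state!r}")
        return self.state
```

An interruption arriving via the notification system would, in these terms, suspend the current activity and later reactivate it from the timeline.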
DOI: 10.1145/2642918.2647391
Citations: 21
Gaze-touch: combining gaze with multi-touch for interaction on the same surface
Ken Pfeuffer, Jason Alexander, M. K. Chong, Hans-Werner Gellersen
Gaze has the potential to complement multi-touch for interaction on the same surface. We present gaze-touch, a technique that combines the two modalities based on the principle of 'gaze selects, touch manipulates'. Gaze is used to select a target, and coupled with multi-touch gestures that the user can perform anywhere on the surface. Gaze-touch enables users to manipulate any target from the same touch position, for whole-surface reachability and rapid context switching. Conversely, gaze-touch enables manipulation of the same target from any touch position on the surface, for example to avoid occlusion. Gaze-touch is designed to complement direct-touch as the default interaction on multi-touch surfaces. We provide a design space analysis of the properties of gaze-touch versus direct-touch, and present four applications that explore how gaze-touch can be used alongside direct-touch. The applications demonstrate use cases for interchangeable, complementary and alternative use of the two modes of interaction, and introduce novel techniques arising from the combination of gaze-touch and conventional multi-touch.
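The 'gaze selects, touch manipulates' principle can be sketched as follows; the class, object model, and pinch handler are illustrative assumptions, not the authors' implementation:

```python
class GazeTouchSurface:
    """Sketch of 'gaze selects, touch manipulates': a touch gesture
    performed anywhere on the surface is applied to the object
    currently under the user's gaze."""
    def __init__(self, objects):
        self.objects = objects  # name -> property dict
        self.gazed = None

    def on_gaze(self, target):
        # Gaze designates the target...
        if target in self.objects:
            self.gazed = target

    def on_pinch(self, scale_factor):
        # ...while the touch position is irrelevant: the gesture
        # manipulates the gazed target, e.g. to avoid occlusion.
        if self.gazed is not None:
            self.objects[self.gazed]['scale'] *= scale_factor
```

This decoupling of touch position from target is what gives the technique its whole-surface reachability.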
DOI: 10.1145/2642918.2647397
Citations: 100
CommandSpace: modeling the relationships between tasks, descriptions and features
Eytan Adar, Mira Dontcheva, Gierad Laput
Users often describe what they want to accomplish with an application in a language that is very different from the application's domain language. To address this gap between system and human language, we propose modeling an application's domain language by mining a large corpus of Web documents about the application using deep learning techniques. A high dimensional vector space representation can model the relationships between user tasks, system commands, and natural language descriptions and supports mapping operations, such as identifying likely system commands given natural language queries and identifying user tasks given a trace of user operations. We demonstrate the feasibility of this approach with a system, CommandSpace, for the popular photo editing application Adobe Photoshop. We build and evaluate several applications enabled by our model showing the power and flexibility of this approach.
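Mapping a natural language query into the shared vector space and returning the closest system command reduces to a similarity search. Here is a toy sketch with hand-made 2-D vectors; the real system uses high-dimensional embeddings learned from a Web corpus:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_command(query_vec, command_vecs):
    """Map a query embedding to the most similar command embedding
    in the shared vector space."""
    return max(command_vecs, key=lambda c: cosine(query_vec, command_vecs[c]))
```

The same lookup run in the other direction (command trace to task descriptions) supports the task-identification mapping mentioned above.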
DOI: 10.1145/2642918.2647395
Citations: 38
Cheaper by the dozen: group annotation of 3D data
A. Boyko, T. Funkhouser
This paper proposes a group annotation approach to interactive semantic labeling of data and demonstrates the idea in a system for labeling objects in 3D LiDAR scans of a city. In this approach, the system selects a group of objects, predicts a semantic label for it, and highlights it in an interactive display. In response, the user either confirms the predicted label, provides a different label, or indicates that no single label can be assigned to all objects in the group. This sequence of interactions repeats until a label has been confirmed for every object in the data set. The main advantage of this approach is that it provides faster interactive labeling rates than alternative approaches, especially in cases where all labels must be explicitly confirmed by a person. The main challenge is to provide an algorithm that selects groups with many objects all of the same label type arranged in patterns that are quick to recognize, which requires models for predicting object labels and for estimating times for people to recognize objects in groups. We address these challenges by defining an objective function that models the estimated time required to process all unlabeled objects and approximation algorithms to minimize it. Results of user studies suggest that group annotation can be used to label objects in LiDAR scans of cities significantly faster than one-by-one annotation with active learning.
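One way to read the objective: choose the group size that minimizes the estimated labeling time per object, trading a single cheap confirmation against the risk of a mixed group. The cost model and time constants below are invented for illustration and are not the paper's actual objective function:

```python
def expected_time(group_size, p_homogeneous,
                  t_confirm=1.5, t_scan=0.3, t_fallback=2.0):
    """Estimated cost of presenting one group: one confirmation plus
    a quick per-object scan if all labels match, otherwise an extra
    per-object fallback labeling cost."""
    cost_pure = t_confirm + group_size * t_scan
    cost_mixed = t_confirm + group_size * (t_scan + t_fallback)
    return (p_homogeneous * cost_pure
            + (1 - p_homogeneous) * cost_mixed)

def best_group_size(n_objects, homogeneity):
    """Pick the group size minimizing estimated time per object;
    `homogeneity(s)` gives the probability a size-s group is pure."""
    sizes = range(1, n_objects + 1)
    return min(sizes, key=lambda s: expected_time(s, homogeneity(s)) / s)
```

With purity decaying geometrically in group size, the optimum lands at a moderate group size, which matches the intuition that grouping pays off until mixed groups become too likely.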
DOI: 10.1145/2642918.2647418
Citations: 15
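The group-selection step described in the abstract above (pick a group the annotator can confirm with one click) can be sketched as a simple greedy heuristic. This is an illustrative simplification, not the paper's actual objective-function minimization; the function name, data shapes, and the confidence-sum score are all assumptions made for this sketch:

```python
from collections import defaultdict

def next_group(predictions, confidences, group_cap=12):
    """Pick the next group of unlabeled objects to show the annotator.

    predictions: {object_id: predicted_label}
    confidences: {object_id: confidence in [0, 1]}

    Greedy heuristic (illustrative only): grouping same-label,
    high-confidence objects maximizes the chance that a single
    'confirm' click covers many objects at once.
    """
    by_label = defaultdict(list)
    for oid, label in predictions.items():
        by_label[label].append(oid)

    best, best_score = None, -1.0
    for label, oids in by_label.items():
        # take the most confident objects first, up to the group cap
        picked = sorted(oids, key=lambda o: -confidences[o])[:group_cap]
        # crude proxy for expected confirmed labels if the group is homogeneous
        score = sum(confidences[o] for o in picked)
        if score > best_score:
            best, best_score = (label, picked), score
    return best
```

In the real system the score would also account for the estimated time a person needs to recognize the group, which is what the paper's objective function models.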
A series of tubes: adding interactivity to 3D prints using internal pipes
Valkyrie Savage, Ryan M. Schmidt, Tovi Grossman, G. Fitzmaurice, Bjoern Hartmann
3D printers offer extraordinary flexibility for prototyping the shape and mechanical function of objects. We investigate how 3D models can be modified to facilitate the creation of interactive objects that offer dynamic input and output. We introduce a general technique for supporting the rapid prototyping of interactivity by removing interior material from 3D models to form internal pipes. We describe this new design space of pipes for interaction design, where variables include openings, path constraints, topologies, and inserted media. We then present PipeDream, a tool for routing such pipes through the interior of 3D models, integrated within a 3D modeling program. We use two distinct routing algorithms. The first has users define pipes' terminals, and uses path routing and physics-based simulation to minimize pipe bending energy, allowing easy insertion of media post-print. The second allows users to supply a desired internal shape to which we fit a pipe route: for this we describe a graph-routing algorithm. We present several prototypes created using our tool to show its flexibility and potential.
{"title":"A series of tubes: adding interactivity to 3D prints using internal pipes","authors":"Valkyrie Savage, Ryan M. Schmidt, Tovi Grossman, G. Fitzmaurice, Bjoern Hartmann","doi":"10.1145/2642918.2647374","DOIUrl":"https://doi.org/10.1145/2642918.2647374","url":null,"abstract":"3D printers offer extraordinary flexibility for prototyping the shape and mechanical function of objects. We investigate how 3D models can be modified to facilitate the creation of interactive objects that offer dynamic input and output. We introduce a general technique for supporting the rapid prototyping of interactivity by removing interior material from 3D models to form internal pipes. We describe this new design space of pipes for interaction design, where variables include openings, path constraints, topologies, and inserted media. We then present PipeDream, a tool for routing such pipes through the interior of 3D models, integrated within a 3D modeling program. We use two distinct routing algorithms. The first has users define pipes' terminals, and uses path routing and physics-based simulation to minimize pipe bending energy, allowing easy insertion of media post-print. The second allows users to supply a desired internal shape to which we fit a pipe route: for this we describe a graph-routing algorithm. We present several prototypes created using our tool to show its flexibility and potential.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91246995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 105
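The graph-routing idea in the PipeDream abstract above (fit a pipe route between user-defined terminals through the model's interior) can be sketched as shortest-path search on a voxel grid. This is a minimal sketch, not the paper's router: it uses plain Dijkstra with unit edge costs on 6-connected interior voxels, and the function name and data representation are assumptions; the paper's first algorithm additionally minimizes pipe bending energy via physics-based simulation, which one could approximate here by charging extra cost for direction changes:

```python
import heapq

def route_pipe(interior, start, goal):
    """Shortest pipe route between two terminal voxels.

    interior: set of (x, y, z) cells strictly inside the model's solid
    volume, so the carved pipe never breaches the printed shell.
    """
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            # walk predecessors back to the start terminal
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        for dx, dy, dz in steps:
            nxt = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
            if nxt in interior and d + 1 < dist.get(nxt, float("inf")):
                dist[nxt] = d + 1
                prev[nxt] = cell
                heapq.heappush(heap, (d + 1, nxt))
    return None  # terminals not connected through the interior
```

The returned voxel path would then be swept with the pipe's radius and subtracted from the solid before printing.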