
Latest publications: Proceedings of the 27th annual ACM symposium on User interface software and technology

Prefab layers and prefab annotations: extensible pixel-based interpretation of graphical interfaces
M. Dixon, A. C. Nied, J. Fogarty
Pixel-based methods have the potential to fundamentally change how we build graphical interfaces, but remain difficult to implement. We introduce a new toolkit for pixel-based enhancements, focused on two areas of support. Prefab Layers helps developers write interpretation logic that can be composed, reused, and shared to manage the multi-faceted nature of pixel-based interpretation. Prefab Annotations supports robustly annotating interface elements with metadata needed to enable runtime enhancements. Together, these help developers overcome subtle but critical dependencies between code and data. We validate our toolkit with (1) demonstrative applications and (2) a lab study that compares how developers build an enhancement using our toolkit versus state-of-the-art methods. Our toolkit addresses core challenges faced by developers when building pixel-based enhancements, potentially opening up pixel-based systems to broader adoption.
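The layered-interpretation idea can be illustrated with a minimal sketch: each layer is a function over the pixel buffer that reads and extends a shared interpretation, so layers compose and later layers can reuse earlier results. Names and structure here are hypothetical, not the paper's actual API.

```python
from typing import Callable, Dict, List

# An "interpretation" accumulates metadata about interface elements found in pixels.
Interpretation = Dict[str, object]
Layer = Callable[[List[List[int]], Interpretation], Interpretation]

def find_widgets(pixels, interp):
    # Toy detection: any row whose pixels all share one value counts as a "widget".
    interp["widgets"] = [i for i, row in enumerate(pixels) if len(set(row)) == 1]
    return interp

def annotate_targets(pixels, interp):
    # A later layer reuses an earlier layer's results, marking widgets as targets.
    interp["targets"] = [("widget", i) for i in interp.get("widgets", [])]
    return interp

def run_layers(pixels, layers):
    interp = {}
    for layer in layers:  # layers compose: each sees all prior results
        interp = layer(pixels, interp)
    return interp

result = run_layers([[1, 1], [1, 2]], [find_widgets, annotate_targets])
```

Because each layer takes and returns the same interpretation structure, layers written by different developers can be shared and recombined, which is the composability the abstract describes.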
DOI: 10.1145/2642918.2647412 · Published 2014-10-05
Citations: 27
3D-board: a whole-body remote collaborative whiteboard
Jakob Zillner, Christoph Rhemann, S. Izadi, M. Haller
This paper presents 3D-Board, a digital whiteboard capable of capturing life-sized virtual embodiments of geographically distributed users. When using large-scale screens for remote collaboration, awareness of the distributed users' gestures and actions is of particular importance. Our work adds to the literature on remote collaborative workspaces: it facilitates intuitive remote collaboration on large-scale interactive whiteboards by preserving awareness of the full-body pose and gestures of the remote collaborator. By blending the front-facing 3D embodiment of a remote collaborator with the shared workspace, an illusion is created as if the observer were looking through the transparent whiteboard into the remote user's room. The system was tested and verified in a usability assessment, showing that 3D-Board significantly improves the effectiveness of remote collaboration on a large interactive surface.
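The see-through effect the abstract describes amounts to compositing the remote user's embodiment with the shared workspace. A toy per-pixel alpha blend conveys the idea; the real system blends full 3D captures, so this is a deliberate simplification with assumed names.

```python
def blend(workspace_px, embodiment_px, alpha=0.5):
    """Blend two RGB pixels; alpha is the embodiment's visibility behind the 'glass'."""
    return tuple(round(alpha * e + (1 - alpha) * w)
                 for w, e in zip(workspace_px, embodiment_px))

# Halfway between a workspace gray and an embodiment gray.
mixed = blend((100, 100, 100), (200, 200, 200))
```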
DOI: 10.1145/2642918.2647393 · Published 2014-10-05
Citations: 45
FlatFitFab: interactive modeling with planar sections
James McCrae, Nobuyuki Umetani, Karan Singh
We present a comprehensive system to author planar section structures, common in art and engineering. A study on how planar section assemblies are imagined and drawn guides our design principles: planar sections are best drawn in-situ, with little foreshortening, orthogonal to intersecting planar sections, exhibiting regularities between planes and contours. We capture these principles with a novel drawing workflow where a single fluid user stroke specifies a 3D plane and its contour in relation to existing planar sections. Regularity is supported by defining a vocabulary of procedural operations for intersecting planar sections. We exploit planar structure properties to provide real-time visual feedback on physically simulated stresses, and geometric verification that the structure is stable, connected and can be assembled. This feedback is validated by real-world fabrication and testing. As evaluation, we report on over 50 subjects who all used our system with minimal instruction to create unique models.
DOI: 10.1145/2642918.2647388 · Published 2014-10-05
Citations: 82
Video digests: a browsable, skimmable format for informational lecture videos
Amy Pavel, Colorado Reed, Bjoern Hartmann, Maneesh Agrawala
Increasingly, authors are publishing long informational talks, lectures, and distance-learning videos online. However, it is difficult to browse and skim the content of such videos using current timeline-based video players. Video digests are a new format for informational videos that afford browsing and skimming by segmenting videos into a chapter/section structure and providing short text summaries and thumbnails for each section. Viewers can navigate by reading the summaries and clicking on sections to access the corresponding point in the video. We present a set of tools to help authors create such digests using transcript-based interactions. With our tools, authors can manually create a video digest from scratch, or they can automatically generate a digest by applying a combination of algorithmic and crowdsourcing techniques and then manually refine it as needed. Feedback from first-time users suggests that our transcript-based authoring tools and automated techniques greatly facilitate video digest creation. In an evaluative crowdsourced study we find that given a short viewing time, video digests support browsing and skimming better than timeline-based or transcript-based video players.
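The chapter/section structure the abstract describes, with a summary and thumbnail per section and click-to-seek navigation, can be sketched as a small data model. The field names here are assumptions, not the authors' format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Section:
    start: float      # seconds into the video
    summary: str      # short text summary shown in the digest
    thumbnail: str    # representative frame for the section

@dataclass
class Chapter:
    title: str
    sections: List[Section] = field(default_factory=list)

def seek_target(chapters, chapter_i, section_i):
    """Clicking a section navigates to its start time in the video."""
    return chapters[chapter_i].sections[section_i].start

digest = [Chapter("Intro", [Section(0.0, "Course overview", "t0.jpg"),
                            Section(42.5, "Prerequisites", "t1.jpg")])]
```

Reading summaries top to bottom supports skimming; `seek_target` is the browsing path from a summary back into the video.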
DOI: 10.1145/2642918.2647400 · Published 2014-10-05
Citations: 103
InterTwine: creating interapplication information scent to support coordinated use of software
Adam Fourney, B. Lafreniere, Parmit K. Chilana, Michael A. Terry
Users often make continued and sustained use of online resources to complement use of a desktop application. For example, users may reference online tutorials to recall how to perform a particular task. While often used in a coordinated fashion, the browser and desktop application provide separate, independent mechanisms for helping users find and re-find task-relevant information. In this paper, we describe InterTwine, a system that links information in the web browser with relevant elements in the desktop application to create interapplication information scent. This explicit link produces a shared interapplication history to assist in re-finding information in both applications. As an example, InterTwine marks all menu items in the desktop application that are currently mentioned in the front-most web page. This paper introduces the notion of interapplication information scent, demonstrates the concept in InterTwine, and describes results from a formative study suggesting the utility of the concept.
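The example in the abstract, marking desktop menu items mentioned in the front-most web page, reduces to matching page text against menu labels. The matching below is a naive substring check for illustration only; InterTwine's actual implementation is richer.

```python
def scent_for_menus(page_text, menu_items):
    """Return which desktop menu items the current web page mentions."""
    text = page_text.lower()
    return {item: item.lower() in text for item in menu_items}

marks = scent_for_menus(
    "Open the Filters menu, then choose Gaussian Blur.",
    ["Gaussian Blur", "Sharpen", "Filters"],
)
```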
DOI: 10.1145/2642918.2647420 · Published 2014-10-05
Citations: 36
Programming by manipulation for layout
Thibaud Hottelier, R. Bodík, Kimiko Ryokai
We present Programming by Manipulation, a new programming methodology for specifying the layout of data visualizations, targeted at non-programmers. We address the two central sources of bugs that arise when programming with constraints: ambiguities and conflicts (inconsistencies). We rule out conflicts by design and exploit ambiguity to explore possible layout designs. Our users design layouts by highlighting undesirable aspects of a current design, effectively breaking spurious constraints and introducing ambiguity by giving some elements freedom to move or resize. Subsequently, the tool indicates how the ambiguity can be removed, by computing how the free elements can be fixed with available constraints. To support this workflow, our tool computes the ambiguity and summarizes it visually. We evaluate our work with two user studies demonstrating that both non-programmers and programmers can effectively use our prototype. Our results suggest that our tool is five times more productive than direct programming with constraints.
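The two bug sources the abstract names can be shown in a toy 1-D setting: a conflict is two constraints pinning the same layout variable to different values, and ambiguity is a variable no constraint pins, which is free to move. The representation and names here are hypothetical.

```python
def classify(constraints):
    """constraints: (variable, value) pairs pinning layout variables."""
    assigned, conflicts = {}, set()
    for var, val in constraints:
        if var in assigned and assigned[var] != val:
            conflicts.add(var)  # inconsistency: what the system rules out by design
        assigned[var] = val
    return assigned, conflicts

def free_variables(all_vars, assigned):
    # Unpinned variables can still move or resize: the design is ambiguous,
    # which the tool exploits to propose alternative layouts.
    return [v for v in all_vars if v not in assigned]

assigned, conflicts = classify([("x", 0), ("x", 5), ("y", 3)])
free = free_variables(["x", "y", "z"], assigned)
```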
DOI: 10.1145/2642918.2647378 · Published 2014-10-05
Citations: 29
Humane representation of thought: a trail map for the 21st century
Bret Victor
New representations of thought -- written language, mathematical notation, information graphics, etc -- have been responsible for some of the most significant leaps in the progress of civilization, by expanding humanity's collectively-thinkable territory. But at debilitating cost. These representations, having been invented for static media such as paper, tap into a small subset of human capabilities and neglect the rest. Knowledge work means sitting at a desk, interpreting and manipulating symbols. The human body is reduced to an eye staring at tiny rectangles and fingers on a pen or keyboard. Like any severely unbalanced way of living, this is crippling to mind and body. But less obviously, and more importantly, it is enormously wasteful of the vast human potential. Human beings naturally have many powerful modes of thinking and understanding. Most are incompatible with static media. In a culture that has contorted itself around the limitations of marks on paper, these modes are undeveloped, unrecognized, or scorned. We are now seeing the start of a dynamic medium. To a large extent, people today are using this medium merely to emulate and extend static representations from the era of paper, and to further constrain the ways in which the human body can interact with external representations of thought. But the dynamic medium offers the opportunity to deliberately invent a humane and empowering form of knowledge work. We can design dynamic representations which draw on the entire range of human capabilities -- all senses, all forms of movement, all forms of understanding -- instead of straining a few and atrophying the rest. This talk suggests how each of the human activities in which thought is externalized (conversing, presenting, reading, writing, etc) can be redesigned around such representations.
DOI: 10.1145/2642918.2642920 · Published 2014-10-05
Citations: 19
RichReview: blending ink, speech, and gesture to support collaborative document review
Dongwook Yoon, Nicholas Chen, François Guimbretière, A. Sellen
This paper introduces a novel document annotation system that aims to enable the kinds of rich communication that usually only occur in face-to-face meetings. Our system, RichReview, lets users create annotations on top of digital documents using three main modalities: freeform inking, voice for narration, and deictic gestures in support of voice. RichReview uses novel visual representations and time-synchronization between modalities to simplify annotation access and navigation. Moreover, RichReview's versatile support for multi-modal annotations enables users to mix and interweave different modalities in threaded conversations. A formative evaluation demonstrates early promise for the system finding support for voice, pointing, and the combination of both to be especially valuable. In addition, initial findings point to the ways in which both content and social context affect modality choice.
DOI: 10.1145/2642918.2647390 · Published 2014-10-05
Citations: 51
Kitty: sketching dynamic and interactive illustrations
Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, G. Fitzmaurice
We present Kitty, a sketch-based tool for authoring dynamic and interactive illustrations. Artists can sketch animated drawings and textures to convey living phenomena, and specify functional relationships between entities to characterize the dynamic behavior of systems and environments. An underlying graph model, customizable through sketching, captures the functional relationships between the visual, spatial, temporal or quantitative parameters of its entities. As the viewer interacts with the resulting dynamic interactive illustration, the parameters of the drawing change accordingly, depicting the dynamics and chain of causal effects within a scene. The generality of this framework makes our tool applicable for a variety of purposes, including technical illustrations, scientific explanation, infographics, medical illustrations, children's e-books, cartoon strips and beyond. A user study demonstrates the ease of usage, variety of applications, artistic expressiveness and creative possibilities of our tool.
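The underlying graph model can be sketched minimally: nodes are entity parameters, edges carry functional relationships, and setting one parameter propagates along edges so a chain of causal effects plays out. The API below is hypothetical, not Kitty's actual interface, and assumes an acyclic relationship graph.

```python
class ParamGraph:
    def __init__(self):
        self.values = {}
        self.edges = []  # (src, dst, fn): dst's value = fn(src's value)

    def relate(self, src, dst, fn):
        self.edges.append((src, dst, fn))

    def set(self, name, value):
        self.values[name] = value
        # Propagate along outgoing edges so dependent parameters update in turn.
        for src, dst, fn in self.edges:
            if src == name:
                self.set(dst, fn(value))

g = ParamGraph()
g.relate("faucet.angle", "water.flow", lambda a: a * 2)
g.relate("water.flow", "cup.level", lambda f: f + 1)
g.set("faucet.angle", 10)  # cascades through flow to level
```

A viewer interaction corresponds to one `set` call; everything downstream in the graph animates as a consequence.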
DOI: 10.1145/2642918.2647375 · Published 2014-10-05
Cited: 97
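The Kitty abstract above describes an underlying graph model in which entity parameters are linked by functional relationships, so that a viewer's interaction with one parameter propagates along a chain of causal effects. The sketch below is a hypothetical illustration of that idea, not Kitty's actual implementation: the `ParamGraph` class and all parameter names (`wheel_angle`, `cart_x`, `smoke_density`) are invented for this example.

```python
# Hypothetical sketch of a parameter graph in the spirit of the Kitty
# abstract: parameters are nodes, and directed edges carry functions
# that recompute a target parameter whenever its source changes.

class ParamGraph:
    def __init__(self):
        self.values = {}   # parameter name -> current value
        self.edges = {}    # source param -> list of (target, fn)

    def add_param(self, name, value):
        self.values[name] = value

    def link(self, src, dst, fn):
        """dst is recomputed as fn(value of src) whenever src changes."""
        self.edges.setdefault(src, []).append((dst, fn))

    def set(self, name, value):
        """Set a parameter (e.g. from viewer interaction) and propagate
        the change along the chain of causal effects."""
        self.values[name] = value
        for dst, fn in self.edges.get(name, []):
            self.set(dst, fn(value))   # recursive propagation

g = ParamGraph()
g.add_param("wheel_angle", 0.0)
g.add_param("cart_x", 0.0)
g.add_param("smoke_density", 0.0)
# Spinning the wheel moves the cart; the cart's position drives the smoke.
g.link("wheel_angle", "cart_x", lambda a: a / 10.0)
g.link("cart_x", "smoke_density", lambda x: min(1.0, x / 10.0))

g.set("wheel_angle", 90.0)        # viewer drags the wheel
print(g.values["cart_x"])         # 9.0
print(g.values["smoke_density"])  # 0.9
```

A single `set` call models the viewer interaction: the update to `wheel_angle` cascades through both links, which is the "chain of causal effects within a scene" the abstract refers to. A real system would also need cycle handling and damping, which this sketch omits.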
Loupe: a handheld near-eye display
Kent Lyons, S. Kim, Shigeyuki Seko, David H. Nguyen, Audrey Desjardins, Mélodie Vidal, D. Dobbelstein, Jeremy Rubin
Loupe is a novel interactive device with a near-eye virtual display similar to head-up display glasses that retains a handheld form factor. We present our hardware implementation and discuss our user interface that leverages Loupe's unique combination of properties. In particular, we present our input capabilities, spatial metaphor, opportunities for using the round aspect of Loupe, and our use of focal depth. We demonstrate how those capabilities come together in an example application designed to allow quick access to information feeds.
{"title":"Loupe: a handheld near-eye display","authors":"Kent Lyons, S. Kim, Shigeyuki Seko, David H. Nguyen, Audrey Desjardins, Mélodie Vidal, D. Dobbelstein, Jeremy Rubin","doi":"10.1145/2642918.2647361","DOIUrl":"https://doi.org/10.1145/2642918.2647361","url":null,"abstract":"Loupe is a novel interactive device with a near-eye virtual display similar to head-up display glasses that retains a handheld form factor. We present our hardware implementation and discuss our user interface that leverages Loupe's unique combination of properties. In particular, we present our input capabilities, spatial metaphor, opportunities for using the round aspect of Loupe, and our use of focal depth. We demonstrate how those capabilities come together in an example application designed to allow quick access to information feeds.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77751143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 7
Journal
Proceedings of the 27th annual ACM symposium on User interface software and technology