
Latest publications from the Proceedings of the 27th annual ACM symposium on User interface software and technology

Graffiti fur: turning your carpet into a computer display
Yuta Sugiura, Koki Toda, T. Hoshi, Youichi Kamiyama, T. Igarashi, M. Inami
We devised a display technology that utilizes the phenomenon whereby the shading properties of fur change as the fibers are raised or flattened. One can erase drawings by flattening the fibers, sweeping the surface by hand in the fibers' growth direction, and draw lines by raising the fibers, moving a finger in the opposite direction. These material properties can be found in various items in our living environments, such as carpets. We have developed three different devices that draw patterns on a "fur display" using this phenomenon: a roller device, a pen device, and a pressure projection device. Our technology can turn ordinary objects in our environment into rewritable displays without requiring or creating any non-reversible modifications to them. In addition, it can present large-scale images without glare, and the images it creates incur no running costs to maintain.
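As an illustration of this draw/erase model (not code from the paper), the toy `FurDisplay` class below treats the fur as a grid of fibers that a stroke either raises (drawing) or flattens (erasing), depending on its direction relative to fiber growth; the class name and grid model are our own simplification.

```python
import numpy as np

class FurDisplay:
    """Toy model of a fur display: 1 = raised fibers (dark), 0 = flattened (light)."""

    def __init__(self, rows, cols):
        # Fibers start flattened; assume the growth direction is +x.
        self.state = np.zeros((rows, cols), dtype=np.uint8)

    def stroke(self, cells, direction):
        """Apply a stroke over grid cells.

        direction: 'with_growth' flattens fibers (erase),
                   'against_growth' raises them (draw).
        """
        value = 0 if direction == "with_growth" else 1
        for r, c in cells:
            self.state[r, c] = value

display = FurDisplay(4, 8)
display.stroke([(1, c) for c in range(8)], "against_growth")  # draw a line
display.stroke([(1, c) for c in range(4)], "with_growth")     # erase half of it
print(display.state)
```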
Citations: 7
In-air gestures around unmodified mobile devices
Jie Song, Gábor Sörös, Fabrizio Pece, S. Fanello, S. Izadi, Cem Keskin, Otmar Hilliges
We present a novel machine-learning-based algorithm that extends the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation and varying lighting conditions. We demonstrate that our algorithm runs in real-time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as the primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens, and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train), the impact of memory footprint, and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations and conclude with directions for future work.
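The paper's model is not reproduced here, so the following is a minimal sketch of a per-frame gesture classifier under assumed choices: coarse downsampled-pixel features and a scikit-learn random forest stand in for the authors' actual features and learning method, and the data is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frame_features(frame, size=16):
    """Downsample a grayscale frame to a coarse grid and flatten it.

    A stand-in for real hand-shape features; frame is a 2D numpy array.
    """
    h, w = frame.shape
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    return frame[np.ix_(ys, xs)].astype(np.float32).ravel() / 255.0

# Synthetic stand-in data: 200 frames of 120x160 'video', 4 gesture classes.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(200, 120, 160))
labels = rng.integers(0, 4, size=200)

X = np.stack([frame_features(f) for f in frames])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# At runtime, classify each incoming frame (and smooth over a short window).
print(clf.predict(X[:5]))
```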
Citations: 175
Expert crowdsourcing with flash teams
Daniela Retelny, Sébastien Robaszkiewicz, Alexandra To, Walter S. Lasecki, Jay Patel, Negar Rahmati, Tulsee Doshi, Melissa A. Valentine, Michael S. Bernstein
We introduce flash teams, a framework for dynamically assembling and managing paid experts from the crowd. Flash teams advance a vision of expert crowd work that accomplishes complex, interdependent goals such as engineering and design. These teams consist of sequences of linked modular tasks and handoffs that can be computationally managed. Interactive systems reason about and manipulate these teams' structures: for example, flash teams can be recombined to form larger organizations and authored automatically in response to a user's request. Flash teams can also hire more people elastically in reaction to task needs, and pipeline intermediate output to accelerate completion times. To enable flash teams, we present Foundry, an end-user authoring platform and runtime manager. Foundry allows users to author modular tasks, then manages teams through handoffs of intermediate work. We demonstrate that Foundry and flash teams enable crowdsourcing of a broad class of goals including design prototyping, course development, and film animation, in half the work time of traditional self-managed teams.
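A flash team's structure of linked modular tasks and handoffs can be modeled as a small dependency graph. The sketch below is our own minimal illustration, not Foundry's API: a hypothetical `Task` dataclass plus a Kahn-style topological sort that orders tasks so each handoff precedes its consumer.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    role: str                      # e.g. "designer", "developer"
    depends_on: list = field(default_factory=list)

def schedule(tasks):
    """Return tasks in an order where every handoff precedes its consumer.

    Simple Kahn-style topological sort; real flash teams also handle
    elastic hiring and pipelining of intermediate output.
    """
    done, order = set(), []
    pending = list(tasks)
    while pending:
        ready = [t for t in pending if all(d in done for d in t.depends_on)]
        if not ready:
            raise ValueError("cyclic dependency between tasks")
        for t in ready:
            order.append(t)
            done.add(t.name)
            pending.remove(t)
    return order

team = [
    Task("mockups", "designer"),
    Task("frontend", "developer", depends_on=["mockups"]),
    Task("user test", "researcher", depends_on=["frontend"]),
]
print([t.name for t in schedule(team)])
```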
Citations: 222
FlexSense: a transparent self-sensing deformable surface
Christian Rendl, David Kim, S. Fanello, Patrick Parzer, Christoph Rhemann, Jonathan Taylor, M. Zirkl, G. Scheipl, T. Rothländer, M. Haller, S. Izadi
We present FlexSense, a new thin-film, transparent sensing surface based on printed piezoelectric sensors, which can reconstruct complex deformations without the need for any external sensing, such as cameras. FlexSense provides a fully self-contained setup which improves mobility and is not affected by occlusions. Using only a sparse set of sensors printed on the periphery of the surface substrate, we devise two new algorithms to fully reconstruct the complex deformations of the sheet from these sparse sensor measurements alone. An evaluation shows that both proposed algorithms are capable of reconstructing complex deformations accurately. We demonstrate how FlexSense can be used for a variety of 2.5D interactions, including as a transparent cover for tablets where bending can be performed alongside touch to enable magic lens style effects, layered input, and mode switching, as well as the ability to use our device as a high degree-of-freedom input controller for gaming and beyond.
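One simple way to frame reconstruction from sparse peripheral sensors is a learned linear map from sensor readings to a low-dimensional deformation representation. The sketch below uses ridge regression on synthetic calibration data; the sensor count, control-point grid, and regression approach are our assumptions, not the paper's two algorithms.

```python
import numpy as np

# Assume 16 peripheral piezo sensors and a 5x5 grid of control-point
# heights describing the sheet's deformation (both numbers are made up).
N_SENSORS, N_CONTROL = 16, 25

rng = np.random.default_rng(1)
# Training pairs: sensor readings -> ground-truth deformations,
# e.g. recorded with an external camera during a calibration phase.
S = rng.normal(size=(500, N_SENSORS))            # sensor readings
D = S @ rng.normal(size=(N_SENSORS, N_CONTROL))  # synthetic deformations

# Ridge regression: W = (S^T S + lambda I)^-1 S^T D
lam = 1e-2
W = np.linalg.solve(S.T @ S + lam * np.eye(N_SENSORS), S.T @ D)

def reconstruct(sensor_reading):
    """Map one frame of sensor readings to control-point heights."""
    return (sensor_reading @ W).reshape(5, 5)

print(reconstruct(S[0]).round(2))
```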
Citations: 80
Skin buttons: cheap, small, low-powered and clickable fixed-icon laser projectors
Gierad Laput, R. Xiao, Xiang 'Anthony' Chen, S. Hudson, Chris Harrison
Smartwatches are a promising new interactive platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches that expand the interactive envelope around smartwatches, allowing human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user's skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size. Through a series of experiments, we show that these 'skin buttons' can have high touch accuracy and recognizability, while being low cost and power-efficient.
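Per-icon touch sensing could reduce to thresholding and debouncing a proximity reading next to each projected icon. The sketch below is illustrative only; the threshold, frame counts, and `detect_taps` helper are hypothetical rather than taken from the paper's hardware.

```python
def detect_taps(samples, threshold=600, min_frames=3):
    """Yield tap events from a stream of per-button proximity samples.

    threshold and min_frames are illustrative values: a finger covering
    the sensor raises the reading, and requiring several consecutive
    frames above threshold debounces noise.
    """
    run = 0
    fired = False
    for i, s in enumerate(samples):
        if s > threshold:
            run += 1
            if run >= min_frames and not fired:
                fired = True
                yield i  # index where the tap is recognized
        else:
            run, fired = 0, False

stream = [100, 120, 650, 700, 690, 710, 130, 90, 640, 660, 655, 120]
print(list(detect_taps(stream)))  # -> [4, 10]
```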
Citations: 119
WirePrint: 3D printed previews for fast prototyping
Stefanie Müller, Sangha Im, Serafima Gurevich, Alexander Teibrich, Lisa Pfisterer, François Guimbretière, Patrick Baudisch
Even though considered a rapid prototyping tool, 3D printing is so slow that a reasonably sized object requires printing overnight. This slows designers down to a single iteration per day. In this paper, we propose to instead print low-fidelity wireframe previews in the early stages of the design process. Wireframe previews are 3D prints in which surfaces have been replaced with a wireframe mesh. Since wireframe previews are to scale and represent the overall shape of the 3D object, they allow users to quickly verify key aspects of their 3D design, such as the ergonomic fit. To maximize the speed-up, we instruct 3D printers to extrude filament not layer-by-layer, but directly in 3D-space, allowing them to create the edges of the wireframe model directly one stroke at a time. This allows us to achieve speed-ups of up to a factor of 10 compared to traditional layer-based printing. We demonstrate how to achieve wireframe previews on standard FDM 3D printers, such as the PrintrBot or the Kossel mini. Users only need to install the WirePrint software, making our approach applicable to many 3D printers already in use today. Finally, wireframe previews use only a fraction of material required for a regular print, making it even more affordable to iterate.
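The core idea, extruding along mesh edges directly in 3D space instead of layer by layer, can be sketched as emitting one straight extrusion move per wireframe edge. The G-code-like generator below is a simplification under our own assumptions (feed rate, extrusion factor); the real system must also order edges bottom-up, insert travel moves, and manage cooling.

```python
import math

def wireframe_gcode(vertices, edges, feed=300, extrude_per_mm=0.05):
    """Emit naive G-code-like moves that trace each wireframe edge in 3D.

    vertices: list of (x, y, z); edges: list of (i, j) index pairs.
    """
    lines, e = [], 0.0
    for i, j in edges:
        (x0, y0, z0), (x1, y1, z1) = vertices[i], vertices[j]
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f} Z{z0:.2f}")  # travel to edge start
        e += math.dist((x0, y0, z0), (x1, y1, z1)) * extrude_per_mm
        lines.append(f"G1 X{x1:.2f} Y{y1:.2f} Z{z1:.2f} E{e:.3f} F{feed}")
    return "\n".join(lines)

# A single tetrahedron as a wireframe preview.
verts = [(0, 0, 0), (20, 0, 0), (10, 17, 0), (10, 6, 16)]
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (1, 3), (2, 3)]
print(wireframe_gcode(verts, edges))
```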
Citations: 206
Physical telepresence: shape capture and display for embodied, computer-mediated remote collaboration
Daniel Leithinger, Sean Follmer, A. Olwal, Hiroshi Ishii
We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users' body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.
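Shape transmission can be sketched as downsampling a remote depth image to the pin grid of a shape display and mapping nearer surfaces to higher pin extensions. The grid size and actuator travel below are illustrative values, not the authors' hardware specifications.

```python
import numpy as np

def depth_to_pins(depth_mm, grid=(24, 24), travel_mm=100.0):
    """Downsample a remote depth image to pin heights for a shape display.

    depth_mm: 2D array of camera depths in millimeters (larger = farther).
    Returns pin extensions in [0, travel_mm].
    """
    h, w = depth_mm.shape
    ys = np.linspace(0, h - 1, grid[0]).astype(int)
    xs = np.linspace(0, w - 1, grid[1]).astype(int)
    block = depth_mm[np.ix_(ys, xs)].astype(np.float32)
    # Nearer surfaces should raise pins higher: invert and normalize.
    near, far = block.min(), block.max()
    return (far - block) / max(far - near, 1e-6) * travel_mm

frame = np.random.default_rng(2).uniform(500, 1500, size=(240, 320))
pins = depth_to_pins(frame)
print(pins.shape, pins.min().round(1), pins.max().round(1))
```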
Citations: 13
Generating emotionally relevant musical scores for audio stories
Steve Rubin, Maneesh Agrawala
Highly produced audio stories often include musical scores that reflect the emotions of the speech. Yet creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant musical scores for audio stories. The user provides a speech track and music tracks, and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.
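A constraint-based dynamic program of this flavor can be sketched as a Viterbi-style search: one music segment is chosen per speech span so as to minimize emotion-mismatch costs plus a penalty for switching segments. The cost weights and the `score_music` helper below are our assumptions; the paper's actual constraints and labels are richer.

```python
def score_music(speech_emotions, segments, transition_cost=1.0):
    """Pick one music segment per speech span, minimizing total cost.

    speech_emotions: one emotion label per span of the story.
    segments: list of (segment_id, emotion_label) music segments.
    """
    INF = float("inf")
    n, m = len(speech_emotions), len(segments)
    cost = [[INF] * m for _ in range(n)]
    back = [[0] * m for _ in range(n)]

    def mismatch(span_emotion, seg):
        return 0.0 if seg[1] == span_emotion else 2.0  # illustrative weight

    for j in range(m):
        cost[0][j] = mismatch(speech_emotions[0], segments[j])
    for i in range(1, n):
        for j in range(m):
            for k in range(m):
                c = cost[i - 1][k] + (0.0 if k == j else transition_cost)
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, k
            cost[i][j] += mismatch(speech_emotions[i], segments[j])

    # Backtrack from the cheapest final state.
    j = min(range(m), key=lambda j: cost[n - 1][j])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return [segments[j][0] for j in reversed(path)]

spans = ["calm", "calm", "tense", "happy"]
library = [("pad_a", "calm"), ("drums_b", "tense"), ("uke_c", "happy")]
print(score_music(spans, library))  # -> ['pad_a', 'pad_a', 'drums_b', 'uke_c']
```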
Citations: 23
Video lens: rapid playback and exploration of large video collections and associated metadata
Justin Matejka, Tovi Grossman, G. Fitzmaurice
We present Video Lens, a framework which allows users to visualize and interactively explore large collections of videos and associated metadata. The primary goal of the framework is to let users quickly find relevant sections within the videos and play them back in rapid succession. The individual UI elements are linked and highly interactive, supporting a faceted search paradigm and encouraging exploration of the data set. We demonstrate the capabilities and specific scenarios of Video Lens within the domain of professional baseball videos. A user study with 12 participants indicates that Video Lens efficiently supports a diverse range of powerful yet desirable video query tasks, while a series of interviews with professionals in the field demonstrates the framework's benefits and future potential.
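Faceted search over video metadata can be sketched as conjunctive filtering of event records that point into source videos by timestamp. The baseball-style schema and `faceted_query` helper below are hypothetical illustrations, not Video Lens's data model.

```python
# Hypothetical event schema: each record points into a source video
# via start/end timestamps, plus facet fields describing the play.
events = [
    {"video": "g1.mp4", "start": 12.0, "end": 19.5, "pitcher": "Lee",
     "pitch": "fastball", "outcome": "strike"},
    {"video": "g1.mp4", "start": 45.0, "end": 52.0, "pitcher": "Lee",
     "pitch": "curveball", "outcome": "ball"},
    {"video": "g2.mp4", "start": 8.0, "end": 15.0, "pitcher": "Ortiz",
     "pitch": "fastball", "outcome": "hit"},
]

def faceted_query(events, **facets):
    """Return playback segments matching every selected facet value."""
    hits = [e for e in events
            if all(e.get(k) == v for k, v in facets.items())]
    return [(e["video"], e["start"], e["end"]) for e in hits]

# 'All fastballs thrown by Lee' -> clips to play back in rapid succession.
print(faceted_query(events, pitcher="Lee", pitch="fastball"))
```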
Citations: 37
Deconstructing and restyling D3 visualizations
Jonathan Harper, Maneesh Agrawala
The D3 JavaScript library has become a ubiquitous tool for developing visualizations on the Web. Yet, once a D3 visualization is published online, its visual style is difficult to change. We present a pair of tools for deconstructing and restyling existing D3 visualizations. Our deconstruction tool analyzes a D3 visualization to extract the data, the marks, and the mappings between them. Our restyling tool lets users modify the visual attributes of the marks as well as the mappings from the data to these attributes. Together, our tools allow users to easily modify D3 visualizations without examining the underlying code, and we show how they can be used to deconstruct and restyle a variety of D3 visualizations.
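The actual tools operate on live D3 pages, where data is bound to DOM nodes; as an offline stand-in, one can parse the rendered SVG, collect each mark's visual attributes, and rewrite selected attributes to restyle. The sketch below makes that assumption and is not the authors' implementation.

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def deconstruct(svg_text, mark_tag="rect"):
    """Collect the visual attributes of every mark of one type."""
    root = ET.fromstring(svg_text)
    return [dict(el.attrib) for el in root.iter(SVG_NS + mark_tag)]

def restyle(svg_text, mark_tag="rect", **new_attrs):
    """Rewrite chosen attributes on every mark and return new SVG text."""
    root = ET.fromstring(svg_text)
    for el in root.iter(SVG_NS + mark_tag):
        for k, v in new_attrs.items():
            el.set(k, v)
    return ET.tostring(root, encoding="unicode")

chart = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect x="0" y="10" width="20" height="40" fill="steelblue"/>
  <rect x="25" y="30" width="20" height="20" fill="steelblue"/>
</svg>"""

print(deconstruct(chart))
print(restyle(chart, fill="tomato", rx="3"))
```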
Citations: 78