
Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology: Latest Publications

On Sounder Ground: CAAT, a Viable Widget for Affective Reaction Assessment
Bruno Cardoso, Osvaldo Santos, T. Romão
The reliable assessment of affective reactions to stimuli is paramount in a variety of scientific fields, including HCI (Human-Computer Interaction). Variation of emotional states over time, however, warrants the need for quick measurements of emotions. To address it, new tools for quick assessments of affective states have been developed. In this work, we explore the CAAT (Circumplex Affective Assessment Tool), an instrument with a unique design in the scope of affect assessment -- a graphical control element -- that makes it amenable to seamless integration in user interfaces. We briefly describe the CAAT and present a multi-dimensional evaluation that evidences the tool's viability. We have assessed its test-retest reliability, construct validity and quickness of use, by collecting data through an unsupervised, web-based user study. Results show high test-retest reliability, evidence the tool's construct validity and confirm its quickness of use, making it a good fit for longitudinal studies and systems requiring quick assessments of emotional reactions.
DOI: 10.1145/2807442.2807465 (published 2015-11-05)
Citations: 5
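As a rough illustration of the test-retest reliability analysis described in the abstract, the sketch below correlates two rating sessions per stimulus on the CAAT's valence and arousal axes. The data layout, scale, and choice of Pearson's r are assumptions for illustration, not the authors' exact procedure.

```python
# Hypothetical sketch: test-retest reliability for a two-dimensional affect widget.
# Assumes each stimulus was rated twice as a (valence, arousal) pair on a -1..1 scale.
from scipy.stats import pearsonr

def test_retest_reliability(session1, session2):
    """session1/session2: lists of (valence, arousal) tuples for the same stimuli."""
    v1, a1 = zip(*session1)
    v2, a2 = zip(*session2)
    r_valence, _ = pearsonr(v1, v2)
    r_arousal, _ = pearsonr(a1, a2)
    return r_valence, r_arousal

# Toy ratings: high correlation across sessions indicates good test-retest reliability.
s1 = [(0.8, 0.2), (-0.5, 0.7), (0.1, -0.9), (0.6, 0.4)]
s2 = [(0.7, 0.3), (-0.4, 0.6), (0.2, -0.8), (0.5, 0.5)]
print(test_retest_reliability(s1, s2))
```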
GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel
Viktor Miruchna, Robert Walter, David Lindlbauer, Maren Lehmann, R. Klitzing, Jörg Müller
We present GelTouch, a gel-based layer that can selectively transition between soft and stiff to provide tactile multi-touch feedback. It is flexible, transparent when not activated, and contains no mechanical, electromagnetic, or hydraulic components, resulting in a compact form factor (a 2mm thin touchscreen layer for our prototype). The activated areas can be morphed freely and continuously, without being limited to fixed, predefined shapes. GelTouch consists of a poly(N-isopropylacrylamide) gel layer which alters its viscoelasticity when activated by applying heat (>32 C). We present three different activation techniques: 1) Indium Tin Oxide (ITO) as a heating element that enables tactile feedback through individually addressable taxels; 2) predefined tactile areas of engraved ITO, that can be layered and combined; 3) complex arrangements of resistance wire that create thin tactile edges. We present a tablet with 6x4 tactile areas, enabling a tactile numpad, slider, and thumbstick. We show that the gel is up to 25 times stiffer when activated and that users detect tactile features reliably (94.8%).
DOI: 10.1145/2807442.2807487 (published 2015-11-05)
Citations: 49
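The 6x4 taxel prototype suggests a simple addressing step between UI layout and gel activation. The sketch below is a hypothetical mapping from a screen-space UI rectangle to the taxels that should be heated; the screen resolution and grid-to-screen registration are assumed, and nothing here reflects the authors' actual control firmware.

```python
# Hypothetical sketch: pick which taxels of a 6x4 grid to heat (above ~32 C) so that
# a rectangular UI element, e.g. a slider track, becomes stiff and tactile.
GRID_COLS, GRID_ROWS = 6, 4          # taxel grid size from the prototype
SCREEN_W, SCREEN_H = 1024, 768       # assumed display resolution

def taxels_for_rect(x, y, w, h):
    """Return (col, row) indices of taxels overlapping a screen-space rectangle."""
    cell_w, cell_h = SCREEN_W / GRID_COLS, SCREEN_H / GRID_ROWS
    cols = range(int(x // cell_w), int((x + w - 1) // cell_w) + 1)
    rows = range(int(y // cell_h), int((y + h - 1) // cell_h) + 1)
    return [(c, r) for c in cols for r in rows
            if 0 <= c < GRID_COLS and 0 <= r < GRID_ROWS]

# A slider spanning the top strip of the screen activates the whole first taxel row.
print(taxels_for_rect(x=0, y=0, w=1024, h=150))
```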
FoveAR: Combining an Optically See-Through Near-Eye Display with Projector-Based Spatial Augmented Reality
Hrvoje Benko, E. Ofek, Feng Zheng, Andrew D. Wilson
Optically see-through (OST) augmented reality glasses can overlay spatially-registered computer-generated content onto the real world. However, current optical designs and weight considerations limit their diagonal field of view to less than 40 degrees, making it difficult to create a sense of immersion or give the viewer an overview of the augmented reality space. We combine OST glasses with a projection-based spatial augmented reality display to achieve a novel display hybrid, called FoveAR, capable of greater than 100 degrees field of view, view dependent graphics, extended brightness and color, as well as interesting combinations of public and personal data display. We contribute details of our prototype implementation and an analysis of the interactive design space that our system enables. We also contribute four prototype experiences showcasing the capabilities of FoveAR as well as preliminary user feedback providing insights for enhancing future FoveAR experiences.
DOI: 10.1145/2807442.2807493 (published 2015-11-05)
Citations: 53
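One implication of combining a narrow-FOV near-eye display with room-scale projection is a routing decision per virtual object. The sketch below is a hedged illustration of that decision based on angular offset from the gaze direction; only the roughly 40-degree FOV figure comes from the abstract, the rest is assumed.

```python
# Hypothetical sketch: route content to the OST glasses (inside the narrow FOV) or to
# the projector-based spatial AR layer (peripheral content), given a gaze direction.
import math

GLASSES_FOV_DEG = 40.0  # diagonal field-of-view limit cited in the abstract

def angular_offset_deg(gaze_dir, obj_dir):
    """Angle in degrees between two unit direction vectors."""
    dot = sum(g * o for g, o in zip(gaze_dir, obj_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def render_target(gaze_dir, obj_dir):
    inside = angular_offset_deg(gaze_dir, obj_dir) <= GLASSES_FOV_DEG / 2
    return "glasses" if inside else "projector"

# An object 30 degrees off the gaze axis falls outside the glasses' FOV -> projector.
obj = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
print(render_target((0.0, 0.0, 1.0), obj))
```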
Corona: Positioning Adjacent Device with Asymmetric Bluetooth Low Energy RSSI Distributions
Haojian Jin, Cheng Xu, Kent Lyons
We introduce Corona, a novel spatial sensing technique that implicitly locates adjacent mobile devices in the same plane by examining asymmetric Bluetooth Low Energy RSSI distributions. The underlying phenomenon is that the off-center BLE antenna and asymmetric radio frequency topology create a characteristic Bluetooth RSSI distribution around the device. By comparing the real-time RSSI readings against a RSSI distribution model, each device can derive the relative position of the other adjacent device. Our experiments using an iPhone and iPad Mini show that Corona yields position estimation at 50% accuracy within a 2cm range, or 85% for the best two candidates. We developed an application to combine Corona with accelerometer readings to mitigate ambiguity and enable cross-device interactions on adjacent devices.
DOI: 10.1145/2807442.2807485 (published 2015-11-05)
Citations: 20
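A minimal sketch of the comparison step the abstract describes: match live RSSI samples against a per-position RSSI distribution model and keep the best-scoring candidate. The Gaussian model, the calibration values, and the four candidate positions are assumptions for illustration.

```python
# Hypothetical sketch: estimate where an adjacent device sits by scoring live BLE RSSI
# samples against a per-position Gaussian model learned during calibration.
import math

RSSI_MODEL = {            # assumed calibration: mean/std of RSSI (dBm) per adjacent side
    "left":   (-48.0, 3.0),
    "right":  (-55.0, 3.5),
    "top":    (-60.0, 4.0),
    "bottom": (-52.0, 3.0),
}

def log_likelihood(samples, mean, std):
    return sum(-0.5 * ((s - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))
               for s in samples)

def estimate_position(samples):
    return max(RSSI_MODEL, key=lambda pos: log_likelihood(samples, *RSSI_MODEL[pos]))

print(estimate_position([-47.5, -49.0, -46.8, -50.1]))  # best match: "left" in this toy model
```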
Improving Haptic Feedback on Wearable Devices through Accelerometer Measurements
Jeffrey R. Blum, I. Frissen, J. Cooperstock
Many variables have been shown to impact whether a vibration stimulus will be perceived. We present a user study that takes into account not only previously investigated predictors such as vibration intensity and duration along with the age of the person receiving the stimulus, but also the amount of motion, as measured by an accelerometer, at the site of vibration immediately preceding the stimulus. This is a more specific measure than in previous studies showing an effect on perception due to gross conditions such as walking. We show that a logistic regression model including prior acceleration is significantly better at predicting vibration perception than a model including only vibration intensity, duration and participant age. In addition to the overall regression, we discuss individual participant differences and measures of classification performance for real-world applications. Our expectation is that haptic interface designers will be able to use such results to design better vibrations that are perceivable under the user's current activity conditions, without being annoyingly loud or jarring, eventually approaching "perceptually equivalent" feedback independent of motion.
DOI: 10.1145/2807442.2807474 (published 2015-11-05)
Citations: 21
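The abstract's central claim is that a logistic regression over intensity, duration, age, and prior acceleration predicts perception better than one without the motion term. The sketch below shows that model form with made-up toy data; the features, units, and library choice are assumptions, not the study's dataset or analysis code.

```python
# Hypothetical sketch: logistic regression predicting whether a vibration is perceived
# from intensity, duration, participant age, and accelerometer motion just before the pulse.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy rows: [intensity 0-1, duration ms, age years, prior acceleration RMS m/s^2]
X = np.array([
    [0.9, 300, 25, 0.1], [0.2, 100, 60, 2.5], [0.7, 250, 40, 0.3],
    [0.3, 120, 55, 1.8], [0.8, 280, 30, 0.2], [0.25, 90, 65, 2.2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = perceived, 0 = missed

model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability that a moderate pulse is felt while the wearer is moving briskly.
print(model.predict_proba([[0.4, 150, 50, 2.0]])[0, 1])
```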
LaserStacker: Fabricating 3D Objects by Laser Cutting and Welding
Udayan Umapathi, Hsiang-Ting Chen, Stefanie Müller, L. Wall, Anna Seufert, Patrick Baudisch
Laser cutters are useful for rapid prototyping because they are fast. However, they only produce planar 2D geometry. One approach to creating non-planar objects is to cut the object in horizontal slices and to stack and glue them. This approach, however, requires manual effort for the assembly and time for the glue to set, defeating the purpose of using a fast fabrication tool. We propose eliminating the assembly step with our system LaserStacker. The key idea is to use the laser cutter to not only cut but also to weld. Users place not one acrylic sheet, but a stack of acrylic sheets into their cutter. In a single process, LaserStacker cuts each individual layer to shape (through all layers above it), welds layers by melting material at their interface, and heals undesired cuts in higher layers. When users take out the object from the laser cutter, it is already assembled. To allow users to model stacked objects efficiently, we built an extension to a commercial 3D editor (SketchUp) that provides tools for defining which parts should be connected and which remain loose. When users hit the export button, LaserStacker converts the 3D model into cutting, welding, and healing instructions for the laser cutter. We show how LaserStacker does not only allow making static objects, such as architectural models, but also objects with moving parts and simple mechanisms, such as scissors, a simple pinball machine, and a mechanical toy with gears.
DOI: 10.1145/2807442.2807512 (published 2015-11-05)
Citations: 54
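As a rough sketch of the slicing idea (cut each acrylic sheet to the object's cross-section at that height and weld it to the sheet below), the code below splits an object's height into sheet-thick layers and emits one cut-and-weld step per layer. The 2 mm thickness comes from the abstract's prototype; the instruction format and the cross-section callback are assumptions, not the authors' SketchUp exporter.

```python
# Hypothetical sketch: turn an object's height into per-sheet cut/weld instructions,
# mirroring the stack-of-acrylic-sheets idea behind LaserStacker.
import math

SHEET_THICKNESS_MM = 2.0  # acrylic sheet thickness used in the prototype

def layer_plan(object_height_mm, cross_section_at):
    """cross_section_at(z) -> 2D outline to cut at height z (would come from the 3D model)."""
    n_layers = math.ceil(object_height_mm / SHEET_THICKNESS_MM)
    plan = []
    for i in range(n_layers):
        z = (i + 0.5) * SHEET_THICKNESS_MM            # sample the cross-section mid-sheet
        plan.append({"layer": i,
                     "cut": cross_section_at(z),       # outline cut through this sheet
                     "weld": i > 0})                    # weld every sheet to the one below
    return plan

# Toy example: a 7 mm tall cylinder of radius 10 mm needs four 2 mm sheets.
print(len(layer_plan(7.0, lambda z: ("circle", 10.0))))
```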
SceneSkim: Searching and Browsing Movies Using Synchronized Captions, Scripts and Plot Summaries
Amy Pavel, Dan B. Goldman, Bjoern Hartmann, Maneesh Agrawala
Searching for scenes in movies is a time-consuming but crucial task for film studies scholars, film professionals, and new media artists. In pilot interviews we have found that such users search for a wide variety of clips---e.g., actions, props, dialogue phrases, character performances, locations---and they return to particular scenes they have seen in the past. Today, these users find relevant clips by watching the entire movie, scrubbing the video timeline, or navigating via DVD chapter menus. Increasingly, users can also index films through transcripts---however, dialogue often lacks visual context, character names, and high level event descriptions. We introduce SceneSkim, a tool for searching and browsing movies using synchronized captions, scripts and plot summaries. Our interface integrates information from such sources to allow expressive search at several levels of granularity: Captions provide access to accurate dialogue, scripts describe shot-by-shot actions and settings, and plot summaries contain high-level event descriptions. We propose new algorithms for finding word-level caption to script alignments, parsing text scripts, and aligning plot summaries to scripts. Film studies graduate students evaluating SceneSkim expressed enthusiasm about the usability of the proposed system for their research and teaching.
DOI: 10.1145/2807442.2807502 (published 2015-11-05)
Citations: 55
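A minimal sketch of the word-level caption-to-script alignment idea: match caption words (which carry timestamps) against script dialogue words so script lines inherit approximate times. Python's difflib stands in for the paper's alignment algorithms, and the data format is assumed.

```python
# Hypothetical sketch: align timed caption words to script dialogue words so the script
# can be indexed by time, in the spirit of SceneSkim's caption/script alignment step.
from difflib import SequenceMatcher

captions = [("i'll", 12.0), ("be", 12.2), ("back", 12.4)]   # (word, start time in seconds)
script_words = ["I'll", "be", "back", "he", "said"]          # words parsed from the script

caption_words = [w for w, _ in captions]
matcher = SequenceMatcher(None,
                          [w.lower() for w in caption_words],
                          [w.lower() for w in script_words])

aligned = {}  # script word index -> caption timestamp
for a, b, size in matcher.get_matching_blocks():
    for k in range(size):
        aligned[b + k] = captions[a + k][1]

print(aligned)  # {0: 12.0, 1: 12.2, 2: 12.4}
```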
HapticPrint: Designing Feel Aesthetics for Digital Fabrication
César Torres, Tim Campbell, Neil Kumar, E. Paulos
Digital fabrication has enabled massive creativity in hobbyist communities and professional product design. These emerging technologies excel at realizing an arbitrary shape or form; however these objects are often rigid and lack the feel desired by designers. We aim to enable physical haptic design in passive 3D printed objects. This paper identifies two core areas for extending physical design into digital fabrication: designing the external and internal haptic characteristics of an object. We present HapticPrint as a pair of design tools to easily modify the feel of a 3D model. Our external tool maps textures and UI elements onto arbitrary shapes, and our internal tool modifies the internal geometry of models for novel compliance and weight characteristics. We demonstrate the value of HapticPrint with a range of applications that expand the aesthetics of feel, usability, and interactivity in 3D artifacts.
DOI: 10.1145/2807442.2807492 (published 2015-11-05)
Citations: 50
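One concrete way to realize the "weight characteristics" mentioned in the abstract is to solve for an infill density that hits a target mass. The sketch below does that simple calculation; the PLA density and the shell/interior split are assumptions, and this is not HapticPrint's internal-geometry algorithm.

```python
# Hypothetical sketch: choose an infill fraction so a print reaches a target mass,
# one simple stand-in for tuning an object's weight via its internal geometry.
PLA_DENSITY_G_PER_CM3 = 1.24  # assumed material density

def infill_for_target_mass(target_mass_g, shell_volume_cm3, interior_volume_cm3):
    """Return the infill fraction (0..1) needed to reach target_mass_g."""
    shell_mass = shell_volume_cm3 * PLA_DENSITY_G_PER_CM3
    max_interior_mass = interior_volume_cm3 * PLA_DENSITY_G_PER_CM3
    needed = target_mass_g - shell_mass
    if not 0 <= needed <= max_interior_mass:
        raise ValueError("target mass not reachable with this geometry")
    return needed / max_interior_mass

# A 2 cm^3 shell with a 20 cm^3 interior, aiming for 10 g total, needs ~30% infill.
print(round(infill_for_target_mass(10.0, 2.0, 20.0), 2))
```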
ATK: Enabling Ten-Finger Freehand Typing in Air Based on 3D Hand Tracking Data
Xin Yi, Chun Yu, M. Zhang, Sida Gao, Ke Sun, Yuanchun Shi
Ten-finger freehand mid-air typing is a potential solution for post-desktop interaction. However, the absence of tactile feedback as well as the inability to accurately distinguish tapping finger or target keys exists as the major challenge for mid-air typing. In this paper, we present ATK, a novel interaction technique that enables freehand ten-finger typing in the air based on 3D hand tracking data. Our hypothesis is that expert typists are able to transfer their typing ability from physical keyboards to mid-air typing. We followed an iterative approach in designing ATK. We first empirically investigated users' mid-air typing behavior, and examined fingertip kinematics during tapping, correlated movement among fingers and 3D distribution of tapping endpoints. Based on the findings, we proposed a probabilistic tap detection algorithm, and augmented Goodman's input correction model to account for the ambiguity in distinguishing tapping finger. We finally evaluated the performance of ATK with a 4-block study. Participants typed 23.0 WPM with an uncorrected word-level error rate of 0.3% in the first block, and later achieved 29.2 WPM in the last block without sacrificing accuracy.
DOI: 10.1145/2807442.2807504 (published 2015-11-05)
Citations: 89
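A hedged sketch of a probabilistic tap detector over fingertip tracking data: score how unusual a downward velocity spike is relative to the finger's resting motion and squash that into a tap probability. The thresholds, Gaussian resting-motion statistics, and logistic squash are illustrative assumptions, not ATK's published algorithm.

```python
# Hypothetical sketch: convert a fingertip's downward velocity into a tap probability
# by comparing it against assumed resting-motion statistics for that finger.
import math

def tap_probability(vz, rest_mean=-0.02, rest_std=0.05):
    """vz: vertical fingertip velocity in m/s (negative = moving down)."""
    z = (rest_mean - vz) / rest_std           # how many std devs faster than resting drift
    return 1.0 / (1.0 + math.exp(-(z - 3.0))) # logistic squash centered at an assumed 3 sigma

for vz in (-0.01, -0.10, -0.30):
    print(vz, round(tap_probability(vz), 3))
# slow drift scores near 0, a sharp downward spike scores near 1
```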
GravitySpot: Guiding Users in Front of Public Displays Using On-Screen Visual Cues
Florian Alt, A. Bulling, G. Gravanis, Daniel Buschek
Users tend to position themselves in front of interactive public displays in such a way as to best perceive its content. Currently, this sweet spot is implicitly defined by display properties, content, the input modality, as well as space constraints in front of the display. We present GravitySpot - an approach that makes sweet spots flexible by actively guiding users to arbitrary target positions in front of displays using visual cues. Such guidance is beneficial, for example, if a particular input technology only works at a specific distance or if users should be guided towards a non-crowded area of a large display. In two controlled lab studies (n=29) we evaluate different visual cues based on color, shape, and motion, as well as position-to-cue mapping functions. We show that both the visual cues and mapping functions allow for fine-grained control over positioning speed and accuracy. Findings are complemented by observations from a 3-month real-world deployment.
DOI: 10.1145/2807442.2807490 (published 2015-11-05)
Citations: 36
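A minimal sketch of a position-to-cue mapping function: scale a visual cue (for example blur or desaturation strength) with the user's distance from the target sweet spot. The linear mapping and the 3 m normalization are assumptions standing in for the mapping functions the paper evaluates.

```python
# Hypothetical sketch: map the user's distance from the target sweet spot to a cue
# intensity in [0, 1], e.g. to drive on-screen blur or desaturation.
import math

def cue_intensity(user_xy, target_xy, max_distance_m=3.0):
    dx = user_xy[0] - target_xy[0]
    dy = user_xy[1] - target_xy[1]
    distance = math.hypot(dx, dy)
    return min(distance / max_distance_m, 1.0)  # assumed linear mapping, clamped at 1

# A user standing 1.5 m from the sweet spot sees the cue at half strength.
print(cue_intensity(user_xy=(2.5, 1.0), target_xy=(1.0, 1.0)))
```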