
Latest publications: Proceedings of the 18th annual ACM symposium on User interface software and technology

DT controls: adding identity to physical interfaces
P. Dietz, B. Harsham, C. Forlines, D. Leigh, W. Yerazunis, S. Shipman, B. Schmidt-Nielsen, Kathy Ryall
In this paper, we show how traditional physical interface components such as switches, levers, knobs and touch screens can be easily modified to identify who is activating each control. This allows us to change the function performed by the control, and the sensory feedback provided by the control itself, dependent upon the user. An auditing function is also available that logs each user's actions. We describe a number of example usage scenarios for our technique, and present two sample implementations.
{"title":"DT controls: adding identity to physical interfaces","authors":"P. Dietz, B. Harsham, C. Forlines, D. Leigh, W. Yerazunis, S. Shipman, B. Schmidt-Nielsen, Kathy Ryall","doi":"10.1145/1095034.1095075","DOIUrl":"https://doi.org/10.1145/1095034.1095075","url":null,"abstract":"In this paper, we show how traditional physical interface components such as switches, levers, knobs and touch screens can be easily modified to identify who is activating each control. This allows us to change the function per-formed by the control, and the sensory feedback provided by the control itself, dependent upon the user. An auditing function is also available that logs each user's actions. We describe a number of example usage scenarios for our tech-nique, and present two sample implementations.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122707244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
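The abstract's core mechanism lends itself to a compact illustration: once the hardware reports who touched a control, the software dispatches a per-user function and appends an audit record. A minimal Python sketch of that idea follows; the class and method names are hypothetical, not the authors' implementation.

```python
import time

class IdentifiedControl:
    """Hypothetical sketch of a DT-controls-style identified control:
    per-user function dispatch plus an audit log."""

    def __init__(self, name):
        self.name = name
        self.bindings = {}   # user id -> function to perform
        self.audit_log = []  # (timestamp, user, control) records

    def bind(self, user, action):
        """Give this control a user-specific function."""
        self.bindings[user] = action

    def activate(self, user):
        """Called when the sensing hardware reports who touched the control."""
        self.audit_log.append((time.time(), user, self.name))
        action = self.bindings.get(user, self.default_action)
        action()

    def default_action(self):
        print(f"{self.name}: no function bound for this user")

# Example: the same physical switch does different things per user.
switch = IdentifiedControl("cabin-light-switch")
switch.bind("driver", lambda: print("dim cabin lights"))
switch.bind("passenger", lambda: print("toggle reading light"))
switch.activate("driver")
switch.activate("passenger")
```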
ViewPointer: lightweight calibration-free eye tracking for ubiquitous handsfree deixis
John D. Smith, Roel Vertegaal, Changuk Sohn
We introduce ViewPointer, a wearable eye contact sensor that detects deixis towards ubiquitous computers embedded in real world objects. ViewPointer consists of a small wearable camera no more obtrusive than a common Bluetooth headset. ViewPointer allows any real-world object to be augmented with eye contact sensing capabilities, simply by embedding a small infrared (IR) tag. The headset camera detects when a user is looking at an infrared tag by determining whether the reflection of the tag on the cornea of the user's eye appears sufficiently central to the pupil. ViewPointer not only allows any object to become an eye contact sensing appliance, it also allows identification of users and transmission of data to the user through the object. We present a novel encoding scheme used to uniquely identify ViewPointer tags, as well as a method for transmitting URLs over tags. We present a number of scenarios of application as well as an analysis of design principles. We conclude eye contact sensing input is best utilized to provide context to action.
{"title":"ViewPointer: lightweight calibration-free eye tracking for ubiquitous handsfree deixis","authors":"John D. Smith, Roel Vertegaal, Changuk Sohn","doi":"10.1145/1095034.1095043","DOIUrl":"https://doi.org/10.1145/1095034.1095043","url":null,"abstract":"We introduce ViewPointer, a wearable eye contact sensor that detects deixis towards ubiquitous computers embedded in real world objects. ViewPointer consists of a small wearable camera no more obtrusive than a common Bluetooth headset. ViewPointer allows any real-world object to be augmented with eye contact sensing capabilities, simply by embedding a small infrared (IR) tag. The headset camera detects when a user is looking at an infrared tag by determining whether the reflection of the tag on the cornea of the user's eye appears sufficiently central to the pupil. ViewPointer not only allows any object to become an eye contact sensing appliance, it also allows identification of users and transmission of data to the user through the object. We present a novel encoding scheme used to uniquely identify ViewPointer tags, as well as a method for transmitting URLs over tags. We present a number of scenarios of application as well as an analysis of design principles. We conclude eye contact sensing input is best utilized to provide context to action.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122059944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 56
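The eye contact test the abstract describes is geometric: a tag counts as looked-at when its corneal reflection lies sufficiently close to the pupil center. A small sketch of that test, with an assumed centrality threshold (the paper's actual tolerance is not given here):

```python
import math

def looking_at_tag(reflection_xy, pupil_center_xy, pupil_radius,
                   centrality=0.5):
    """Return True if the IR tag's corneal reflection falls close enough
    to the pupil center, per the test described in the abstract.
    The 0.5 centrality fraction is an assumed parameter, not a
    published value."""
    dx = reflection_xy[0] - pupil_center_xy[0]
    dy = reflection_xy[1] - pupil_center_xy[1]
    return math.hypot(dx, dy) <= centrality * pupil_radius

# Reflection 3-4 px from the center of a 20 px pupil: counts as eye contact.
print(looking_at_tag((103, 98), (100, 100), pupil_radius=20))  # True
```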
Low-cost multi-touch sensing through frustrated total internal reflection
Jefferson Y. Han
This paper describes a simple, inexpensive, and scalable technique for enabling high-resolution multi-touch sensing on rear-projected interactive surfaces based on frustrated total internal reflection. We review previous applications of this phenomenon to sensing, provide implementation details, discuss results from our initial prototype, and outline future directions.
{"title":"Low-cost multi-touch sensing through frustrated total internal reflection","authors":"Jefferson Y. Han","doi":"10.1145/1095034.1095054","DOIUrl":"https://doi.org/10.1145/1095034.1095054","url":null,"abstract":"This paper describes a simple, inexpensive, and scalable technique for enabling high-resolution multi-touch sensing on rear-projected interactive surfaces based on frustrated total internal reflection. We review previous applications of this phenomenon to sensing, provide implementation details, discuss results from our initial prototype, and outline future directions.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127668558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1160
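In an FTIR setup, touches appear as bright spots in the rear camera's IR image, so the sensing step reduces to thresholding and blob extraction. A minimal sketch using standard image-processing calls; the threshold value is an assumption, and a real system would also subtract a background frame and calibrate camera-to-screen coordinates:

```python
import numpy as np
from scipy import ndimage

def detect_touches(ir_frame, threshold=200):
    """Find bright FTIR blobs in an 8-bit IR camera frame and return
    their (row, col) centroids."""
    mask = ir_frame > threshold
    labels, n = ndimage.label(mask)          # connected components
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

# Synthetic frame with two "finger" blobs.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:110, 200:210] = 255
frame[300:308, 400:412] = 255
print(detect_touches(frame))  # ~ (104.5, 204.5) and (303.5, 405.5)
```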
eyeLook: using attention to facilitate mobile media consumption
C. Dickie, Roel Vertegaal, Changuk Sohn, D. Cheng
One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media, and managing life.
{"title":"eyeLook: using attention to facilitate mobile media consumption","authors":"C. Dickie, Roel Vertegaal, Changuk Sohn, D. Cheng","doi":"10.1145/1095034.1095050","DOIUrl":"https://doi.org/10.1145/1095034.1095050","url":null,"abstract":"One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media, and managing life.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134116189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 48
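seeTV's behavior can be pictured as a play/pause gate driven by the eye contact sensor; seeTXT gates word advancement the same way. A toy sketch of the gating logic, with a hypothetical ECS callback:

```python
class AttentiveVideoPlayer:
    """Sketch of seeTV-style behavior: play only while the eye contact
    sensor reports attention. The callback interface is hypothetical."""

    def __init__(self):
        self.playing = False

    def on_eye_contact_changed(self, looking: bool):
        if looking and not self.playing:
            self.playing = True
            print("resume playback")
        elif not looking and self.playing:
            self.playing = False
            print("pause playback")

player = AttentiveVideoPlayer()
for looking in [True, True, False, True]:  # simulated ECS readings
    player.on_eye_contact_changed(looking)
```

seeTXT would apply the same gate to a word index instead of a playback state, flashing the next word only while attention is present.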
PapierCraft: a command system for interactive paper
Chunyuan Liao, François Guimbretière, K. Hinckley
Knowledge workers use paper extensively for document reviewing and note-taking due to its versatility and simplicity of use. As users annotate printed documents and gather notes, they create a rich web of annotations and cross references. Unfortunately, as paper is a static medium, this web often gets trapped in the physical world. While several digital solutions such as XLibris [15] and Digital Desk [18] have been proposed, they suffer from a small display size or onerous hardware requirements. To address these limitations, we propose PapierCraft, a gesture-based interface that allows users to manipulate digital documents directly using their printouts as proxies. Using a digital pen, users can annotate a printout or draw command gestures to indicate operations such as copying a document area, pasting an area previously copied, or creating a link. Upon pen synchronization, our infrastructure executes these commands and presents the result in a customized viewer. In this paper we describe the design and implementation of the PapierCraft command system, and report on early user feedback.
{"title":"PapierCraft: a command system for interactive paper","authors":"Chunyuan Liao, François Guimbretière, K. Hinckley","doi":"10.1145/1095034.1095074","DOIUrl":"https://doi.org/10.1145/1095034.1095074","url":null,"abstract":"Knowledge workers use paper extensively for document reviewing and note-taking due to its versatility and simplicity of use. As users annotate printed documents and gather notes, they create a rich web of annotations and cross references. Unfortunately, as paper is a static media, this web often gets trapped in the physical world. While several digital solutions such as XLibris [15] and Digital Desk [18] have been proposed, they suffer from a small display size or onerous hardware requirements.To address these limitations, we propose PapierCraft, a gesture-based interface that allows users to manipulate digital documents directly using their printouts as proxies. Using a digital pen, users can annotate a printout or draw command gestures to indicate operations such as copying a document area, pasting an area previously copied, or creating a link. Upon pen synchronization, our infrastructure executes these commands and presents the result in a customized viewer. In this paper we describe the design and implementation of the PapierCraft command system, and report on early user feedback.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115534563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 113
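A distinctive point in the abstract is deferred execution: gestures drawn on paper are only carried out when the pen is synchronized. A toy sketch of that batch model follows; the command names and data structure are illustrative assumptions, not the paper's implementation:

```python
class PapierCraftSession:
    """Sketch of deferred gesture execution: commands captured on paper
    are logged, then replayed in order at pen synchronization."""

    def __init__(self):
        self.pending = []    # (command, region) pairs, in capture order
        self.clipboard = None

    def record(self, command, region):
        self.pending.append((command, region))

    def synchronize(self):
        """Run at pen docking: execute the captured gestures."""
        for command, region in self.pending:
            if command == "copy":
                self.clipboard = region
                print(f"copied {region}")
            elif command == "paste":
                print(f"pasted {self.clipboard} at {region}")
            elif command == "link":
                print(f"linked {region} to {self.clipboard}")
        self.pending.clear()

session = PapierCraftSession()
session.record("copy", "page 2, figure 1")
session.record("paste", "notebook page 5")
session.synchronize()
```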
Supporting interaction in augmented reality in the presence of uncertain spatial knowledge
E. M. Coelho, B. MacIntyre, S. Julier
A significant problem encountered when building Augmented Reality (AR) systems is that all spatial knowledge about the world has uncertainty associated with it. This uncertainty manifests itself as registration errors between the graphics and the physical world, and ambiguity in user interaction. In this paper, we show how estimates of the registration error can be leveraged to support predictable selection in the presence of uncertain 3D knowledge. These ideas are demonstrated in osgAR, an extension to OpenSceneGraph with explicit support for uncertainty in the 3D transformations. The osgAR runtime propagates this uncertainty throughout the scene graph to compute robust estimates of the probable location of all entities in the system from the user's viewpoint, in real-time. We discuss the implementation of selection in osgAR, and the issues that must be addressed when creating interaction techniques in such a system.
{"title":"Supporting interaction in augmented reality in the presence of uncertain spatial knowledge","authors":"E. M. Coelho, B. MacIntyre, S. Julier","doi":"10.1145/1095034.1095052","DOIUrl":"https://doi.org/10.1145/1095034.1095052","url":null,"abstract":"A significant problem encountered when building Augmented Reality (AR) systems is that all spatial knowledge about the world has uncertainty associated with it. This uncertainty manifests itself as registration errors between the graphics and the physical world, and ambiguity in user interaction. In this paper, we show how estimates of the registration error can be leveraged to support predictable selection in the presence of uncertain 3D knowledge. These ideas are demonstrated in osgAR, an extension to OpenSceneGraph with explicit support for uncertainty in the 3D transformations. The osgAR runtime propagates this uncertainty throughout the scene graph to compute robust estimates of the probable location of all entities in the system from the user's viewpoint, in real-time. We discuss the implementation of selection in osgAR, and the issues that must be addressed when creating interaction techniques in such a system.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117104727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
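The abstract's "propagates this uncertainty throughout the scene graph" is, at first order, standard covariance propagation: for a rigid transform x' = Rx + t, the position covariance maps as cov' = R cov R^T. A worked sketch of one such step; this shows the general technique, not osgAR source:

```python
import numpy as np

def propagate_covariance(R, cov):
    """First-order propagation of position covariance through a rigid
    transform x' = R x + t: cov' = R cov R^T."""
    return R @ cov @ R.T

theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
cov = np.diag([4.0, 1.0])   # uncertainty mostly along x, in cm^2
print(propagate_covariance(R, cov))
```

Chaining such steps down a scene graph yields, per entity, an uncertainty region from the user's viewpoint; a selection routine can then test whether a pointing ray falls inside, say, a 95% confidence ellipse rather than a single point.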
Automation and customization of rendered web pages
Michael Bolin, Matt Webber, P. Rha, Tom Wilson, Rob Miller
On the desktop, an application can expect to control its user interface down to the last pixel, but on the World Wide Web, a content provider has no control over how the client will view the page, once delivered to the browser. This creates an opportunity for end-users who want to automate and customize their web experiences, but the growing complexity of web pages and standards prevents most users from realizing this opportunity. We describe Chickenfoot, a programming system embedded in the Firefox web browser, which enables end-users to automate, customize, and integrate web applications without examining their source code. One way Chickenfoot addresses this goal is a novel technique for identifying page components by keyword pattern matching. We motivate this technique by studying how users name web page components, and present a heuristic keyword matching algorithm that identifies the desired component from the user's name.
{"title":"Automation and customization of rendered web pages","authors":"Michael Bolin, Matt Webber, P. Rha, Tom Wilson, Rob Miller","doi":"10.1145/1095034.1095062","DOIUrl":"https://doi.org/10.1145/1095034.1095062","url":null,"abstract":"On the desktop, an application can expect to control its user interface down to the last pixel, but on the World Wide Web, a content provider has no control over how the client will view the page, once delivered to the browser. This creates an opportunity for end-users who want to automate and customize their web experiences, but the growing complexity of web pages and standards prevents most users from realizing this opportunity. We describe Chickenfoot, a programming system embedded in the Firefox web browser, which enables end-users to automate, customize, and integrate web applications without examining their source code. One way Chickenfoot addresses this goal is a novel technique for identifying page components by keyword pattern matching. We motivate this technique by studying how users name web page components, and present a heuristic keyword matching algorithm that identifies the desired component from the user's name.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115740894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 260
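The keyword matching idea can be illustrated with a toy scorer: rank candidate page components by how well their visible text covers the words in the user's name for the component. The scoring below is an illustrative assumption, not the paper's published heuristic:

```python
def best_match(name, components):
    """Toy keyword matching in the spirit of Chickenfoot: pick the page
    component whose visible text best covers the words in the user's
    name for it."""
    words = set(name.lower().split())

    def score(text):
        overlap = words & set(text.lower().split())
        return len(overlap) / max(len(words), 1)

    return max(components, key=lambda c: score(c["text"]))

components = [
    {"id": "q",     "text": "Search query"},
    {"id": "btn",   "text": "Search the web"},
    {"id": "lucky", "text": "I'm feeling lucky"},
]
print(best_match("feeling lucky", components))  # the "lucky" component
```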
Physical embodiments for mobile communication agents
Stefan Marti, C. Schmandt
This paper describes a physically embodied and animated user interface to an interactive call handling agent, consisting of a small wireless animatronic device in the form of a squirrel, bunny, or parrot. A software tool creates movement primitives, composes these primitives into complex behaviors, and triggers these behaviors dynamically at state changes in the conversational agent's finite state machine. Gaze and gestural cues from the animatronics alert both the user and co-located third parties of incoming phone calls, and data suggests that such alerting is less intrusive than conventional telephones.
{"title":"Physical embodiments for mobile communication agents","authors":"Stefan Marti, C. Schmandt","doi":"10.1145/1095034.1095073","DOIUrl":"https://doi.org/10.1145/1095034.1095073","url":null,"abstract":"This paper describes a physically embodied and animated user interface to an interactive call handling agent, consisting of a small wireless animatronic device in the form of a squirrel, bunny, or parrot. A software tool creates movement primitives, composes these primitives into complex behaviors, and triggers these behaviors dynamically at state changes in the conversational agent's finite state machine. Gaze and gestural cues from the animatronics alert both the user and co-located third parties of incoming phone calls, and data suggests that such alerting is less intrusive than conventional telephones.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124842776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 48
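The triggering scheme in the abstract amounts to attaching composed behaviors to edges of the agent's finite state machine. A toy sketch; the states and behaviors are invented for illustration:

```python
# Behaviors (composed elsewhere from movement primitives) keyed by
# FSM transition; all names here are illustrative assumptions.
BEHAVIORS = {
    ("idle", "incoming_call"):     "perk up ears, turn head toward user",
    ("incoming_call", "answered"): "nod, settle into listening pose",
    ("incoming_call", "ignored"):  "droop, return to idle pose",
}

class AgentFSM:
    """Sketch: fire an animatronic behavior whenever the conversational
    agent's state machine changes state."""

    def __init__(self):
        self.state = "idle"

    def transition(self, new_state):
        behavior = BEHAVIORS.get((self.state, new_state))
        if behavior:
            print(f"animatronics: {behavior}")
        self.state = new_state

agent = AgentFSM()
agent.transition("incoming_call")
agent.transition("answered")
```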
Supporting interspecies social awareness: using peripheral displays for distributed pack awareness
Demi Mankoff, A. Dey, Jennifer Mankoff, K. Mankoff
In interspecies households, it is common for the non homo sapien members to be isolated and ignored for many hours each day when humans are out of the house or working. For pack animals, such as canines, information about a pack member's extended pack interactions (outside of the nuclear household) could help to mitigate this social isolation. We have developed a Pack Activity Watch System: Allowing Broad Interspecies Love In Telecommunication with Internet-Enabled Sociability (PAWSABILITIES) for helping to support remote awareness of social activities. Our work focuses on canine companions, and includes pawticipatory design, labradory tests, and canid camera monitoring.
{"title":"Supporting interspecies social awareness: using peripheral displays for distributed pack awareness","authors":"Demi Mankoff, A. Dey, Jennifer Mankoff, K. Mankoff","doi":"10.1145/1095034.1095076","DOIUrl":"https://doi.org/10.1145/1095034.1095076","url":null,"abstract":"In interspecies households, it is common for the non homo sapien members to be isolated and ignored for many hours each day when humans are out of the house or working. For pack animals, such as canines, information about a pack member's extended pack interactions (outside of the nuclear household) could help to mitigate this social isolation. We have developed a Pack Activity Watch System: Allowing Broad Interspecies Love In Telecommunication with Internet-Enabled Sociability (PAWSABILITIES) for helping to support remote awareness of social activities. Our work focuses on canine companions, and includes, pawticipatory design, labradory tests, and canid camera monitoring.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127893839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 46
Moveable interactive projected displays using projector based tracking
J. C. Lee, S. Hudson, J. Summet, P. Dietz
Video projectors have typically been used to display images on surfaces whose geometric relationship to the projector remains constant, such as walls or pre-calibrated surfaces. In this paper, we present a technique for projecting content onto moveable surfaces that adapts to the motion and location of the surface to simulate an active display. This is accomplished using a projector based location tracking technique. We use light sensors embedded into the moveable surface and project low-perceptibility Gray-coded patterns to first discover the sensor locations, and then incrementally track them at interactive rates. We describe how to reduce the perceptibility of tracking patterns, achieve interactive tracking rates, use motion modeling to improve tracking performance, and respond to sensor occlusions. A group of tracked sensors can define quadrangles for simulating moveable displays while single sensors can be used as control inputs. By unifying the tracking and display technology into a single mechanism, we can substantially reduce the cost and complexity of implementing applications that combine motion tracking and projected imagery.
{"title":"Moveable interactive projected displays using projector based tracking","authors":"J. C. Lee, S. Hudson, J. Summet, P. Dietz","doi":"10.1145/1095034.1095045","DOIUrl":"https://doi.org/10.1145/1095034.1095045","url":null,"abstract":"Video projectors have typically been used to display images on surfaces whose geometric relationship to the projector remains constant, such as walls or pre-calibrated surfaces. In this paper, we present a technique for projecting content onto moveable surfaces that adapts to the motion and location of the surface to simulate an active display. This is accomplished using a projector based location tracking techinque. We use light sensors embedded into the moveable surface and project low-perceptibility Gray-coded patterns to first discover the sensor locations, and then incrementally track them at interactive rates. We describe how to reduce the perceptibility of tracking patterns, achieve interactive tracking rates, use motion modeling to improve tracking performance, and respond to sensor occlusions. A group of tracked sensors can define quadrangles for simulating moveable displays while single sensors can be used as control inputs. By unifying the tracking and display technology into a single mechanism, we can substantially reduce the cost and complexity of implementing applications that combine motion tracking and projected imagery.","PeriodicalId":101797,"journal":{"name":"Proceedings of the 18th annual ACM symposium on User interface software and technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123582498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 113
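Gray-coded patterns are what let each embedded light sensor decode its own projector-pixel coordinate from a temporal sequence of on/off frames, since binary-reflected Gray codes change only one bit between adjacent positions. A self-contained sketch of the encode/decode round trip; frame capture and the paper's incremental tracking are omitted:

```python
def gray_encode(n):
    """Binary-reflected Gray code: adjacent positions differ in one bit,
    so a sensor misreading a single frame lands near its true position."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by folding in successive right shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def pattern_bits(x, num_bits=10):
    """The on/off sequence a sensor at projector column x would observe
    over num_bits Gray-code frames (MSB first)."""
    g = gray_encode(x)
    return [(g >> i) & 1 for i in reversed(range(num_bits))]

# A sensor at column 389 observes ten frames, then recovers its column.
bits = pattern_bits(389)
g = int("".join(map(str, bits)), 2)
print(gray_decode(g))  # 389
```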