
Proceedings of the 26th annual ACM symposium on User interface software and technology: latest publications

Fiberio: a touchscreen that senses fingerprints
Christian Holz, Patrick Baudisch
We present Fiberio, a rear-projected multitouch table that identifies users biometrically based on their fingerprints during each touch interaction. Fiberio accomplishes this using a new type of screen material: a large fiber optic plate. The plate diffuses light on transmission, thereby allowing it to act as a projection surface. At the same time, the plate reflects light specularly, which produces the contrast required for fingerprint sensing. In addition to offering all the functionality known from traditional diffused illumination systems, Fiberio is the first interactive tabletop system that authenticates users during touch interaction, unobtrusively and securely, using the biometric features of fingerprints, which eliminates the need for users to carry any identification tokens.
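The per-touch authentication the abstract describes can be pictured as a simple matching loop: the fingerprint captured at each touch is compared against enrolled templates and the best match above a threshold identifies the user. The sketch below is purely illustrative, not the paper's method: `identify_touch`, the cosine-similarity stand-in for real fingerprint matching, and the 0.9 threshold are all assumptions.

```python
# Hypothetical sketch only: Fiberio matches optical fingerprint images captured
# through the fiber optic plate; here a cosine similarity between pre-extracted
# feature vectors stands in for real fingerprint matching.
from typing import Dict, Optional
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors (stand-in matcher)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def identify_touch(touch_features: np.ndarray,
                   enrolled: Dict[str, np.ndarray],
                   threshold: float = 0.9) -> Optional[str]:
    """Return the enrolled user whose template best matches this touch,
    or None if no template clears the acceptance threshold."""
    best_user, best_score = None, threshold
    for user, template in enrolled.items():
        score = similarity(touch_features, template)
        if score > best_score:
            best_user, best_score = user, score
    return best_user

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enrolled = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
    touch = enrolled["alice"] + rng.normal(scale=0.05, size=128)  # a touch by Alice
    print(identify_touch(touch, enrolled))  # expected: alice
```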
{"title":"Fiberio: a touchscreen that senses fingerprints","authors":"Christian Holz, Patrick Baudisch","doi":"10.1145/2501988.2502021","DOIUrl":"https://doi.org/10.1145/2501988.2502021","url":null,"abstract":"We present Fiberio, a rear-projected multitouch table that identifies users biometrically based on their fingerprints during each touch interaction. Fiberio accomplishes this using a new type of screen material: a large fiber optic plate. The plate diffuses light on transmission, thereby allowing it to act as projection surface. At the same time, the plate reflects light specularly, which produces the contrast required for fingerprint sensing. In addition to offering all the functionality known from traditional diffused illumination systems, Fiberio is the first interactive tabletop system that authenticates users during touch interaction-unobtrusively and securely using the biometric features of fingerprints, which eliminates the need for users to carry any identification tokens.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133516424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 95
Session details: GUI
Wilmot Li
{"title":"Session details: GUI","authors":"Wilmot Li","doi":"10.1145/3254705","DOIUrl":"https://doi.org/10.1145/3254705","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130584953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Humans and the coming machine revolution
R. D’Andrea
The key components of feedback control systems -- sensors, actuators, computation, power, and communication -- are continually becoming smaller, lighter, more robust, higher performance, and less expensive. By using appropriate algorithms and system architectures, it is thus becoming possible to "close the loop" on almost any machine, and to create new capabilities that fully exploit their dynamic potential. In this talk I will discuss various projects -- involving mobile robots, flying machines, an autonomous table, and actuated wingsuits -- where these new machine competencies are interfaced with the ultimate dynamic entities: human beings.
{"title":"Humans and the coming machine revolution","authors":"R. D’Andrea","doi":"10.1145/2501988.2508466","DOIUrl":"https://doi.org/10.1145/2501988.2508466","url":null,"abstract":"The key components of feedback control systems -- sensors, actuators, computation, power, and communication -- are continually becoming smaller, lighter, more robust, higher performance, and less expensive. By using appropriate algorithms and system architectures, it is thus becoming possible to \"close the loop\" on almost any machine, and to create new capabilities that fully exploit their dynamic potential. In this talk I will discuss various projects -- involving mobile robots, flying machines, an autonomous table, and actuated wingsuits -- where these new machine competencies are interfaced with the ultimate dynamic entities: human beings.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126653844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
A mixed-initiative tool for designing level progressions in games
Eric Butler, Adam M. Smith, Yun-En Liu, Zoran Popovic
Creating game content requires balancing design considerations at multiple scales: each level requires effort and iteration to produce, and broad-scale constraints such as the order in which game concepts are introduced must be respected. Game designers currently create informal plans for how the game's levels will fit together, but they rarely keep these plans up-to-date when levels change during iteration and testing. This leads to violations of constraints and makes changing the high-level plans expensive. To address these problems, we explore the creation of mixed-initiative game progression authoring tools which explicitly model broad-scale design considerations. These tools let the designer specify constraints on progressions, and keep the plan synchronized when levels are edited. This enables the designer to move between broad and narrow-scale editing and allows for automatic detection of problems caused by edits to levels. We further leverage advances in procedural content generation to help the designer rapidly explore and test game progressions. We present a prototype implementation of such a tool for our actively-developed educational game, Refraction. We also describe how this system could be extended for use in other games and domains, specifically for the domains of math problem sets and interactive programming tutorials.
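The "broad-scale constraints such as the order in which game concepts are introduced" lend themselves to a simple mechanical check. The sketch below is an assumption about how such a constraint could be encoded and verified, not the tool's actual model; `check_progression`, the level names, and the concept names are all illustrative rather than taken from Refraction.

```python
# Minimal sketch, under assumed data structures, of a broad-scale progression
# constraint: every concept a level uses must have been introduced in an
# earlier level.
def check_progression(levels, prerequisites):
    """levels: ordered list of (level_name, set_of_concepts_used).
    prerequisites: concept -> set of concepts that must appear in an earlier level.
    Returns a list of (level_name, concept, missing_prerequisites) violations."""
    introduced = set()
    violations = []
    for name, concepts in levels:
        for concept in concepts:
            missing = prerequisites.get(concept, set()) - introduced
            if missing:
                violations.append((name, concept, missing))
        introduced |= concepts  # concepts count as introduced after this level
    return violations

if __name__ == "__main__":
    levels = [
        ("intro", {"move_piece"}),
        ("early_combo", {"move_piece", "combine_lasers"}),  # combine used before split is taught
        ("first_split", {"split_laser"}),
    ]
    prerequisites = {"split_laser": {"move_piece"},
                     "combine_lasers": {"split_laser"}}
    print(check_progression(levels, prerequisites))
    # -> [('early_combo', 'combine_lasers', {'split_laser'})]
```

Re-running such a check after every level edit is one way a tool could "automatically detect problems caused by edits to levels", as the abstract puts it.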
{"title":"A mixed-initiative tool for designing level progressions in games","authors":"Eric Butler, Adam M. Smith, Yun-En Liu, Zoran Popovic","doi":"10.1145/2501988.2502011","DOIUrl":"https://doi.org/10.1145/2501988.2502011","url":null,"abstract":"Creating game content requires balancing design considerations at multiple scales: each level requires effort and iteration to produce, and broad-scale constraints such as the order in which game concepts are introduced must be respected. Game designers currently create informal plans for how the game's levels will fit together, but they rarely keep these plans up-to-date when levels change during iteration and testing. This leads to violations of constraints and makes changing the high-level plans expensive. To address these problems, we explore the creation of mixed-initiative game progression authoring tools which explicitly model broad-scale design considerations. These tools let the designer specify constraints on progressions, and keep the plan synchronized when levels are edited. This enables the designer to move between broad and narrow-scale editing and allows for automatic detection of problems caused by edits to levels. We further leverage advances in procedural content generation to help the designer rapidly explore and test game progressions. We present a prototype implementation of such a tool for our actively-developed educational game, Refraction. We also describe how this system could be extended for use in other games and domains, specifically for the domains of math problem sets and interactive programming tutorials.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"362 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126029125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 61
UltraHaptics: multi-point mid-air haptic feedback for touch surfaces
Thomas Carter, S. A. Seah, Benjamin Long, B. Drinkwater, S. Subramanian
We introduce UltraHaptics, a system designed to provide multi-point haptic feedback above an interactive surface. UltraHaptics employs focused ultrasound to project discrete points of haptic feedback through the display and directly on to users' unadorned hands. We investigate the desirable properties of an acoustically transparent display and demonstrate that the system is capable of creating multiple localised points of feedback in mid-air. Through psychophysical experiments we show that feedback points with different tactile properties can be identified at smaller separations. We also show that users are able to distinguish between different vibration frequencies of non-contact points with training. Finally, we explore a number of exciting new interaction possibilities that UltraHaptics provides.
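Focusing an ultrasound phased array at a point in mid-air rests on a standard calculation: give each transducer a phase offset proportional to its distance from the focal point, so that all waves arrive there in phase. The sketch below illustrates only that calculation under assumed parameters (a 16x16 array at 1 cm pitch, a 40 kHz carrier); it is not UltraHaptics' driver code, and it omits the modulation that gives each point its tactile properties and vibration frequency.

```python
# Minimal phased-array focusing sketch (assumed array layout and frequency).
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40_000.0     # 40 kHz ultrasound carrier (assumed)
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

def focus_phases(transducer_positions: np.ndarray, focal_point: np.ndarray) -> np.ndarray:
    """Per-transducer phase offsets (radians) that focus the array at focal_point."""
    distances = np.linalg.norm(transducer_positions - focal_point, axis=1)
    # Offsetting each transducer's phase by its path length in wavelengths
    # makes all waves arrive in phase at the focal point.
    return (2 * np.pi * distances / WAVELENGTH) % (2 * np.pi)

if __name__ == "__main__":
    # A 16x16 grid of transducers at 1 cm pitch in the z=0 plane (assumed layout).
    xs, ys = np.meshgrid(np.arange(16) * 0.01, np.arange(16) * 0.01)
    positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
    focal = np.array([0.075, 0.075, 0.20])  # 20 cm above the array centre
    print(focus_phases(positions, focal)[:5])
```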
{"title":"UltraHaptics: multi-point mid-air haptic feedback for touch surfaces","authors":"Thomas Carter, S. A. Seah, Benjamin Long, B. Drinkwater, S. Subramanian","doi":"10.1145/2501988.2502018","DOIUrl":"https://doi.org/10.1145/2501988.2502018","url":null,"abstract":"We introduce UltraHaptics, a system designed to provide multi-point haptic feedback above an interactive surface. UltraHaptics employs focused ultrasound to project discrete points of haptic feedback through the display and directly on to users' unadorned hands. We investigate the desirable properties of an acoustically transparent display and demonstrate that the system is capable of creating multiple localised points of feedback in mid-air. Through psychophysical experiments we show that feedback points with different tactile properties can be identified at smaller separations. We also show that users are able to distinguish between different vibration frequencies of non-contact points with training. Finally, we explore a number of exciting new interaction possibilities that UltraHaptics provides.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"09 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116621153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 427
StickEar: making everyday objects respond to sound
Kian Peen Yeo, Suranga Nanayakkara, Shanaka Ransiri
This paper presents StickEar, a system consisting of a network of distributed 'Sticker-like' sound-based sensor nodes to propose a means of enabling sound-based interactions on everyday objects. StickEar encapsulates wireless sensor network technology into a form factor that is intuitive to reuse and redeploy. Each StickEar sensor node consists of a miniature sized microphone and speaker to provide sound-based input/output capabilities. We provide a discussion of interaction design space and hardware design space of StickEar that cuts across domains such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and controlling of digital devices using sound. We implemented three applications to demonstrate the unique interaction capabilities of StickEar.
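Of the behaviours listed, "autonomous response to sound events" is the simplest to illustrate in code. The sketch below is purely hypothetical and not the node's firmware: it watches the RMS level of incoming audio frames and fires a callback when a threshold is crossed; the frame size, threshold, and `monitor` API are assumptions.

```python
# Hypothetical sound-event trigger for a StickEar-like node.
import numpy as np

def rms(frame: np.ndarray) -> float:
    """Root-mean-square amplitude of one audio frame."""
    return float(np.sqrt(np.mean(np.square(frame))))

def monitor(frames, threshold: float, on_event) -> None:
    """Invoke on_event(index, level) for every frame whose RMS exceeds threshold."""
    for i, frame in enumerate(frames):
        level = rms(frame)
        if level > threshold:
            on_event(i, level)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    quiet = [rng.normal(scale=0.01, size=256) for _ in range(5)]
    loud = [rng.normal(scale=0.5, size=256)]  # a simulated knock
    monitor(quiet + loud, threshold=0.1,
            on_event=lambda i, lvl: print(f"sound event in frame {i}: RMS {lvl:.2f}"))
```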
{"title":"StickEar: making everyday objects respond to sound","authors":"Kian Peen Yeo, Suranga Nanayakkara, Shanaka Ransiri","doi":"10.1145/2501988.2502019","DOIUrl":"https://doi.org/10.1145/2501988.2502019","url":null,"abstract":"This paper presents StickEar, a system consisting of a network of distributed 'Sticker-like' sound-based sensor nodes to propose a means of enabling sound-based interactions on everyday objects. StickEar encapsulates wireless sensor network technology into a form factor that is intuitive to reuse and redeploy. Each StickEar sensor node consists of a miniature sized microphone and speaker to provide sound-based input/output capabilities. We provide a discussion of interaction design space and hardware design space of StickEar that cuts across domains such as remote sound monitoring, remote triggering of sound, autonomous response to sound events, and controlling of digital devices using sound. We implemented three applications to demonstrate the unique interaction capabilities of StickEar.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130823860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Improving structured data entry on mobile devices
K. Chang, B. Myers, Gene M. Cahill, Soumya Simanta, E. Morris, G. Lewis
Structure makes data more useful, but also makes data entry more cumbersome. Studies have found that this is especially true on mobile devices, as mobile users often reject structured personal information management tools because the structure is too restrictive and makes entering data slower. To overcome these problems, we introduce a new data entry technique that lets users create customized structured data in an unstructured manner. We use a novel notepad-like editing interface with built-in data detectors that allow users to specify structured data implicitly and reuse the structures when desired. To minimize the amount of typing, it provides intelligent, context-sensitive autocomplete suggestions using personal and public databases that contain candidate information to be entered. We implemented these mechanisms in an example application called Listpad. Our evaluation shows that people using Listpad create customized structured data 16% faster than using a conventional mobile database tool. The speed further increases to 42% when the fields can be autocompleted.
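The "intelligent, context-sensitive autocomplete" idea can be made concrete with a small sketch. Everything below is assumed for illustration rather than taken from Listpad: suggestions for a field are drawn from a personal usage history plus a public candidate list, filtered by the typed prefix, and ranked by how often the user picked them before.

```python
# Hypothetical field-aware autocomplete in the spirit of the abstract.
from collections import Counter
from typing import Dict, List

def suggest(field: str, prefix: str,
            personal: Dict[str, Counter], public: Dict[str, List[str]],
            limit: int = 5) -> List[str]:
    """Return up to `limit` completions for `prefix` in the given field."""
    history = personal.get(field, Counter())
    candidates = set(history) | set(public.get(field, []))
    matches = [c for c in candidates if c.lower().startswith(prefix.lower())]
    # Frequently used personal values first, then alphabetical.
    matches.sort(key=lambda c: (-history[c], c))
    return matches[:limit]

if __name__ == "__main__":
    personal = {"restaurant": Counter({"Thai Palace": 3, "Taco Loco": 1})}
    public = {"restaurant": ["Thai Palace", "The Codfather", "Taverna Roma"]}
    print(suggest("restaurant", "t", personal, public))
```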
{"title":"Improving structured data entry on mobile devices","authors":"K. Chang, B. Myers, Gene M. Cahill, Soumya Simanta, E. Morris, G. Lewis","doi":"10.1145/2501988.2502043","DOIUrl":"https://doi.org/10.1145/2501988.2502043","url":null,"abstract":"Structure makes data more useful, but also makes data entry more cumbersome. Studies have found that this is especially true on mobile devices, as mobile users often reject structured personal information management tools because the structure is too restrictive and makes entering data slower. To overcome these problems, we introduce a new data entry technique that lets users create customized structured data in an unstructured manner. We use a novel notepad-like editing interface with built-in data detectors that allow users to specify structured data implicitly and reuse the structures when desired. To minimize the amount of typing, it provides intelligent, context-sensitive autocomplete suggestions using personal and public databases that contain candidate information to be entered. We implemented these mechanisms in an example application called Listpad. Our evaluation shows that people using Listpad create customized structured data 16% faster than using a conventional mobile database tool. The speed further increases to 42% when the fields can be autocompleted.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130863249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Proceedings of the 26th annual ACM symposium on User interface software and technology
S. Izadi, A. Quigley, I. Poupyrev, T. Igarashi
It is our pleasure to welcome you to the 26th Annual ACM Symposium on User Interface Software and Technology (UIST) 2013, held from October 8-11th, in the historic town and University of St Andrews, Scotland, United Kingdom. UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from many areas, including web and graphical interfaces, new input and output devices, information visualization, sensing technologies, interactive displays, tabletop and tangible computing, interaction techniques, augmented and virtual reality, ubiquitous computing, and computer supported cooperative work. The single-track program and intimate size make UIST 2013 an ideal place to exchange results at the cutting edge of user interfaces research, to meet friends and colleagues, and to forge future collaborations. We received a record 317 paper submissions from more than 30 countries. After a thorough review process, the program committee accepted 62 papers (19.5%). Each anonymous submission was first reviewed by three external reviewers, and meta-reviews were provided by two program committee members. If any of the five reviewers deemed a submission to pass a rejection threshold, we asked the authors to submit a short rebuttal addressing the reviewers' concerns. The program committee met in person in Pittsburgh, PA, on May 30-31, 2013, to select the papers for the conference. Submissions were finally accepted only after the authors provided a final revision addressing the committee's comments. In addition to the presentations of accepted papers, this year's program includes a keynote by Raffaello D'Andrea (ETH Zurich) on feedback control systems for autonomous machines. A great line-up of posters, demos, (the ninth) annual Doctoral Symposium, and (the fifth) annual Student Innovation Contest (this year focusing on programmable water pumps called Pumpspark) complete the program. We hope you enjoy all aspects of the UIST 2013 program, and that you get to enjoy our wonderful venues and that your discussions and interactions prove fruitful.
{"title":"Proceedings of the 26th annual ACM symposium on User interface software and technology","authors":"S. Izadi, A. Quigley, I. Poupyrev, T. Igarashi","doi":"10.1145/2501988","DOIUrl":"https://doi.org/10.1145/2501988","url":null,"abstract":"It is our pleasure to welcome you to the 26th Annual ACM Symposium on User Interface Software and Technology (UIST) 2013, held from October 8-11th, in the historic town and University of St Andrews, Scotland, United Kingdom. \u0000 \u0000UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from many areas, including web and graphical interfaces, new input and output devices, information visualization, sensing technologies, interactive displays, tabletop and tangible computing, interaction techniques, augmented and virtual reality, ubiquitous computing, and computer supported cooperative work. The single-track program and intimate size, makes UIST 2013 an ideal place to exchange results at the cutting edge of user interfaces research, to meet friends and colleagues, and to forge future collaborations. \u0000 \u0000We received a record 317 paper submissions from more than 30 countries. After a thorough review process, the program committee accepted 62 papers (19.5%). Each anonymous submission was first reviewed by three external reviewers, and meta-reviews were provided by two program committee members. If any of the five reviewers deemed a submission to pass a rejection threshold we asked the authors to submit a short rebuttal addressing the reviewers' concerns. The program committee met in person in Pittsburgh, PA, on May 30-31, 2013, to select the papers for the conference. Submissions were finally accepted only after the authors provided a final revision addressing the committee's comments. \u0000 \u0000In addition to the presentations of accepted papers, this year's program includes a keynote by Raffaello D'Andrea (ETH Zurich) on feedback control systems for autonomous machines. A great line up of posters, demos, (the ninth) annual Doctoral Symposium, and (the fifth) annual Student Innovation Contest (this year focusing on programmable water pumps called Pumpspark) complete the program. We hope you enjoy all aspects of the UIST 2013 program, and that you get to enjoy our wonderful venues and that your discussions and interactions prove fruitful.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132002393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
Pursuit calibration: making gaze calibration less tedious and more flexible
Ken Pfeuffer, Mélodie Vidal, J. Turner, A. Bulling, Hans-Werner Gellersen
Eye gaze is a compelling interaction modality but requires user calibration before interaction can commence. State of the art procedures require the user to fixate on a succession of calibration markers, a task that is often experienced as difficult and tedious. We present pursuit calibration, a novel approach that, unlike existing methods, is able to detect the user's attention to a calibration target. This is achieved by using moving targets, and correlation of eye movement and target trajectory, implicitly exploiting smooth pursuit eye movement. Data for calibration is then only sampled when the user is attending to the target. Because of its ability to detect user attention, pursuit calibration can be performed implicitly, which enables more flexible designs of the calibration task. We demonstrate this in application examples and user studies, and show that pursuit calibration is tolerant to interruption, can blend naturally with applications and is able to calibrate users without their awareness.
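The detection step, deciding whether the user is actually following the moving calibration target, boils down to correlating the raw gaze signal with the known target trajectory; because correlation ignores offsets and per-axis scaling, it can work even before the gaze data is calibrated. The sketch below is an assumed minimal version of that idea: the `is_following` function and the 0.8 threshold are illustrative, not the paper's parameters.

```python
# Minimal sketch of pursuit detection: accept calibration samples only when the
# gaze trace correlates strongly with the moving target's trajectory.
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equally long 1-D signals."""
    return float(np.corrcoef(a, b)[0, 1])

def is_following(gaze_xy: np.ndarray, target_xy: np.ndarray,
                 threshold: float = 0.8) -> bool:
    """True if both x and y gaze components track the target's motion."""
    return (pearson(gaze_xy[:, 0], target_xy[:, 0]) > threshold and
            pearson(gaze_xy[:, 1], target_xy[:, 1]) > threshold)

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 200)
    target = np.column_stack([np.cos(t), np.sin(t)])  # circular target path
    rng = np.random.default_rng(2)
    attentive = target * 0.95 + rng.normal(scale=0.02, size=target.shape)
    distracted = rng.normal(size=target.shape)        # gaze wandering elsewhere
    print(is_following(attentive, target), is_following(distracted, target))  # True False
```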
{"title":"Pursuit calibration: making gaze calibration less tedious and more flexible","authors":"Ken Pfeuffer, Mélodie Vidal, J. Turner, A. Bulling, Hans-Werner Gellersen","doi":"10.1145/2501988.2501998","DOIUrl":"https://doi.org/10.1145/2501988.2501998","url":null,"abstract":"Eye gaze is a compelling interaction modality but requires user calibration before interaction can commence. State of the art procedures require the user to fixate on a succession of calibration markers, a task that is often experienced as difficult and tedious. We present pursuit calibration, a novel approach that, unlike existing methods, is able to detect the user's attention to a calibration target. This is achieved by using moving targets, and correlation of eye movement and target trajectory, implicitly exploiting smooth pursuit eye movement. Data for calibration is then only sampled when the user is attending to the target. Because of its ability to detect user attention, pursuit calibration can be performed implicitly, which enables more flexible designs of the calibration task. We demonstrate this in application examples and user studies, and show that pursuit calibration is tolerant to interruption, can blend naturally with applications and is able to calibrate users without their awareness.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132221513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 155
DigiTaps: eyes-free number entry on touchscreens with minimal audio feedback
Shiri Azenkot, Cynthia L. Bennett, R. Ladner
Eyes-free input usually relies on audio feedback that can be difficult to hear in noisy environments. We present DigiTaps, an eyes-free number entry method for touchscreen devices that requires little auditory attention. To enter a digit, users tap or swipe anywhere on the screen with one, two, or three fingers. The 10 digits are encoded by combinations of these gestures that relate to the digits' semantics. For example, the digit 2 is input with a 2-finger tap. We conducted a longitudinal evaluation with 16 people and found that DigiTaps with no audio feedback was faster but less accurate than with audio feedback after every input. Throughout the study, participants entered numbers with no audio feedback at an average rate of 0.87 characters per second, with an uncorrected error rate of 5.63%.
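The abstract specifies the input vocabulary (taps or swipes with one, two, or three fingers, combined to encode the ten digits) and gives one concrete mapping: a two-finger tap enters the digit 2. Decoding such a scheme is a simple lookup. In the sketch below only that one entry is grounded in the abstract; the rest of the table, and the single-gesture-per-digit simplification, are illustrative assumptions (the real encoding combines gestures to cover all ten digits).

```python
# Hypothetical gesture-to-digit decoder in the spirit of DigiTaps.
from typing import Dict, NamedTuple, Optional

class Gesture(NamedTuple):
    kind: str      # "tap" or "swipe"
    fingers: int   # 1, 2, or 3

# Illustrative code table; only ("tap", 2) -> 2 is stated in the abstract.
DIGIT_TABLE: Dict[Gesture, int] = {
    Gesture("tap", 1): 1,
    Gesture("tap", 2): 2,
    Gesture("tap", 3): 3,
    Gesture("swipe", 1): 4,
    Gesture("swipe", 2): 5,
    Gesture("swipe", 3): 6,
}

def decode(gesture: Gesture) -> Optional[int]:
    """Return the digit for a recognised gesture, or None if it is not in the table."""
    return DIGIT_TABLE.get(gesture)

if __name__ == "__main__":
    print(decode(Gesture("tap", 2)))    # 2, per the abstract's example
    print(decode(Gesture("swipe", 3)))  # 6 in this illustrative table
```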
{"title":"DigiTaps: eyes-free number entry on touchscreens with minimal audio feedback","authors":"Shiri Azenkot, Cynthia L. Bennett, R. Ladner","doi":"10.1145/2501988.2502056","DOIUrl":"https://doi.org/10.1145/2501988.2502056","url":null,"abstract":"Eyes-free input usually relies on audio feedback that can be difficult to hear in noisy environments. We present DigiTaps, an eyes-free number entry method for touchscreen devices that requires little auditory attention. To enter a digit, users tap or swipe anywhere on the screen with one, two, or three fingers. The 10 digits are encoded by combinations of these gestures that relate to the digits' semantics. For example, the digit 2 is input with a 2-finger tap. We conducted a longitudinal evaluation with 16 people and found that DigiTaps with no audio feedback was faster but less accurate than with audio feedback after every input. Throughout the study, participants entered numbers with no audio feedback at an average rate of 0.87 characters per second, with an uncorrected error rate of 5.63%.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127630812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35