
Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology: Latest Publications

Detecting student frustration based on handwriting behavior
H. Asai, H. Yamana
Detecting states of frustration among students engaged in learning activities is critical to the success of teaching assistance tools. We examine the relationship between a student's pen activity and his/her state of frustration while solving handwritten problems. Based on a user study involving mathematics problems, we found that our detection method was able to detect student frustration with a precision of 87% and a recall of 90%. We also identified several particularly discriminative features, including writing stroke number, erased stroke number, pen activity time, and air stroke speed.
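The abstract names the discriminative features but not the model itself; below is a minimal, hypothetical sketch of how such pen features could be combined into a frustration score. The baselines, weights, and threshold are illustrative assumptions, not the authors' trained values.

    # Hypothetical sketch: score frustration from the four pen features the
    # abstract reports as discriminative. All constants are placeholders.

    def frustration_score(written_strokes, erased_strokes,
                          pen_activity_time_s, air_stroke_speed_mm_s):
        """Combine normalized pen features into a single score in [0, 1]."""
        # Normalize each feature against an assumed per-problem baseline.
        f = [
            written_strokes / 60.0,        # many strokes -> more rework
            erased_strokes / 20.0,         # frequent erasing -> uncertainty
            pen_activity_time_s / 300.0,   # long dwell on one problem
            air_stroke_speed_mm_s / 200.0, # agitated in-air pen movement
        ]
        weights = [0.2, 0.35, 0.2, 0.25]   # illustrative weights
        return sum(w * min(x, 1.0) for w, x in zip(weights, f))

    def is_frustrated(sample, threshold=0.5):
        return frustration_score(**sample) >= threshold

    print(is_frustrated({"written_strokes": 80, "erased_strokes": 25,
                         "pen_activity_time_s": 400,
                         "air_stroke_speed_mm_s": 260}))  # -> True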
{"title":"Detecting student frustration based on handwriting behavior","authors":"H. Asai, H. Yamana","doi":"10.1145/2508468.2514718","DOIUrl":"https://doi.org/10.1145/2508468.2514718","url":null,"abstract":"Detecting states of frustration among students engaged in learning activities is critical to the success of teaching assistance tools. We examine the relationship between a student's pen activity and his/her state of frustration while solving handwritten problems. Based on a user study involving mathematics problems, we found that our detection method was able to detect student frustration with a precision of 87% and a recall of 90%. We also identified several particularly discriminative features, including writing stroke number, erased stroke number, pen activity time, and air stroke speed.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115228063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Flexkit: a rapid prototyping platform for flexible displays
David Holman, Jesse Burstyn, R. Brotman, A. Younkin, Roel Vertegaal
Commercially available development platforms for flexible displays are not designed for rapid prototyping. To create a deformable interface that uses a functional flexible display, designers must be familiar with embedded hardware systems and the corresponding programming. We introduce Flexkit, a platform that allows designers to rapidly prototype deformable applications. With Flexkit, designers can quickly build prototypes around a thin-film electrophoretic display that is effectively "Plug and Play". To demonstrate Flexkit's ease of use, we present its application in PaperTab's design iteration as a case study. We further discuss how dithering can be used to increase the frame rate of electrophoretic displays from 1 fps to 5 fps.
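The frame-rate claim rests on a standard trade-off for electrophoretic panels: a binary (1-bit) partial update completes much faster than a full multi-level grayscale refresh, so dithering each grayscale frame down to 1 bit trades gray levels for update speed. A minimal sketch of ordered (Bayer) dithering follows; the frame source is synthetic and the panel driver is out of scope:

    import numpy as np

    # 4x4 Bayer threshold matrix, scaled to [0, 1).
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def dither_to_1bit(gray):
        """Ordered-dither a grayscale frame (floats in [0, 1]) to 1 bit.

        A 1-bit frame can be flashed to an electrophoretic panel with a
        fast partial update, which is what makes ~5 fps plausible where a
        full grayscale refresh manages ~1 fps.
        """
        h, w = gray.shape
        thresholds = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return (gray > thresholds).astype(np.uint8)

    frame = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)  # synthetic gradient
    print(dither_to_1bit(frame).mean())  # ~0.5: gradient preserved as density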
{"title":"Flexkit: a rapid prototyping platform for flexible displays","authors":"David Holman, Jesse Burstyn, R. Brotman, A. Younkin, Roel Vertegaal","doi":"10.1145/2508468.2514934","DOIUrl":"https://doi.org/10.1145/2508468.2514934","url":null,"abstract":"Commercially available development platforms for flexible displays are not designed for rapid prototyping. To create a deformable interface, one that uses a functional flexible display, designers must be familiar with embedded hardware systems and corresponding programming. We introduce Flexkit, a platform that allows designers to rapidly prototype deformable applications. With Flexkit, designers can rapidly prototype using a thin-film electrophoretic display, one that is \"Plug and Play\". To demonstrate Flexkit's ease-of-use, we present its application in PaperTab's design iteration as a case study. We further discuss how dithering can be used to increase the frame rate of electrophoretic displays from 1fps to 5fps.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"32 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115346546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
QOOK: a new physical-virtual coupling experience for active reading
Yuhang Zhao, Yongqiang Qin, Yang Liu, Siqi Liu, Yuanchun Shi
We present QOOK, an interactive reading system that combines the benefits of physical and digital books to facilitate active reading. QOOK uses a top-mounted projector to render digital content on a blank paper book. By detecting markers attached to each page, QOOK allows users to flip pages just as they would with a real book. Electronic functions such as keyword search, highlighting, and bookmarking provide users with additional digital assistance. With a Kinect sensor that recognizes touch gestures, QOOK lets people invoke these electronic functions directly with their fingers. The combination of the virtual interface's electronic functions with free-form interaction with the physical book creates a natural reading experience, enabling faster navigation between pages and better understanding of the book's contents.
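A sketch of the page-tracking logic the abstract implies: each physical page carries a marker, and the projected content switches when a newly visible marker indicates a page flip. Marker detection itself is stubbed out, and the IDs and page map below are hypothetical.

    # Hypothetical sketch of QOOK-style page tracking: marker IDs seen by an
    # overhead camera select which page content the projector should render.

    PAGE_CONTENT = {  # marker id -> digital content for that page (assumed)
        0: "cover.svg",
        1: "page-01.svg",
        2: "page-02.svg",
    }

    class PageTracker:
        def __init__(self):
            self.current_page = None

        def on_markers_detected(self, marker_ids):
            """Called per camera frame with the set of visible marker IDs."""
            visible = [m for m in marker_ids if m in PAGE_CONTENT]
            if not visible:
                return None  # page mid-flip, or occluded by the hand
            top = max(visible)  # assume the highest visible page is open
            if top != self.current_page:
                self.current_page = top
                return PAGE_CONTENT[top]  # ask the projector to redraw
            return None

    tracker = PageTracker()
    print(tracker.on_markers_detected({0, 1}))  # -> "page-01.svg"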
{"title":"QOOK: a new physical-virtual coupling experience for active reading","authors":"Yuhang Zhao, Yongqiang Qin, Yang Liu, Siqi Liu, Yuanchun Shi","doi":"10.1145/2508468.2514928","DOIUrl":"https://doi.org/10.1145/2508468.2514928","url":null,"abstract":"We present QOOK, an interactive reading system that incorporates the benefits of both physical and digital books to facilitate active reading. QOOK uses a top-projector to create digital contents on a blank paper book. By detecting markers attached to each page, QOOK allows users to flip pages just like they would with a real book. Electronic functions such as keyword searching, highlighting and bookmarking are included to provide users with additional digital assistance. With a Kinect sensor that recognizes touch gestures, QOOK enables people to use these electronic functions directly with their fingers. The combination of the electronic functions of the virtual interface and free-form interaction with the physical book creates a natural reading experience, providing an opportunity for faster navigation between pages and better understanding of the book contents.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116733287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology
S. Izadi, A. Quigley, I. Poupyrev, T. Igarashi
It is our pleasure to welcome you to the 26th Annual ACM Symposium on User Interface Software and Technology (UIST) 2013, held October 8-11 in the historic town and University of St Andrews, Scotland, United Kingdom. UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from many areas, including web and graphical interfaces, new input and output devices, information visualization, sensing technologies, interactive displays, tabletop and tangible computing, interaction techniques, augmented and virtual reality, ubiquitous computing, and computer-supported cooperative work. The single-track program and intimate size make UIST 2013 an ideal place to exchange results at the cutting edge of user interface research, to meet friends and colleagues, and to forge future collaborations. We received a record 317 paper submissions from more than 30 countries. After a thorough review process, the program committee accepted 62 papers (19.5%). Each anonymous submission was first reviewed by three external reviewers, and meta-reviews were provided by two program committee members. If any of the five reviewers deemed a submission to pass the rejection threshold, we asked the authors to submit a short rebuttal addressing the reviewers' concerns. The program committee met in person in Pittsburgh, PA, on May 30-31, 2013, to select the papers for the conference. Submissions were accepted only after the authors provided a final revision addressing the committee's comments. In addition to the presentations of accepted papers, this year's program includes a keynote by Raffaello D'Andrea (ETH Zurich) on feedback control systems for autonomous machines. A great line-up of posters, demos, the ninth annual Doctoral Symposium, and the fifth annual Student Innovation Contest (this year focusing on programmable water pumps called Pumpspark) completes the program. We hope you enjoy all aspects of the UIST 2013 program, that you get to enjoy our wonderful venues, and that your discussions and interactions prove fruitful.
{"title":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","authors":"S. Izadi, A. Quigley, I. Poupyrev, T. Igarashi","doi":"10.1145/2508468","DOIUrl":"https://doi.org/10.1145/2508468","url":null,"abstract":"It is our pleasure to welcome you to the 26th Annual ACM Symposium on User Interface Software and Technology (UIST) 2013, held from October 8-11th, in the historic town and University of St Andrews, Scotland, United Kingdom. \u0000 \u0000UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from many areas, including web and graphical interfaces, new input and output devices, information visualization, sensing technologies, interactive displays, tabletop and tangible computing, interaction techniques, augmented and virtual reality, ubiquitous computing, and computer supported cooperative work. The single-track program and intimate size, makes UIST 2013 an ideal place to exchange results at the cutting edge of user interfaces research, to meet friends and colleagues, and to forge future collaborations. \u0000 \u0000We received a record 317 paper submissions from more than 30 countries. After a thorough review process, the program committee accepted 62 papers (19.5%). Each anonymous submission was first reviewed by three external reviewers, and meta-reviews were provided by two program committee members. If any of the five reviewers deemed a submission to pass a rejection threshold we asked the authors to submit a short rebuttal addressing the reviewers' concerns. The program committee met in person in Pittsburgh, PA, on May 30-31, 2013, to select the papers for the conference. Submissions were finally accepted only after the authors provided a final revision addressing the committee's comments. \u0000 \u0000In addition to the presentations of accepted papers, this year's program includes a keynote by Raffaello D'Andrea (ETH Zurich) on feedback control systems for autonomous machines. A great line up of posters, demos, (the ninth) annual Doctoral Symposium, and (the fifth) annual Student Innovation Contest (this year focusing on programmable water pumps called Pumpspark) complete the program. We hope you enjoy all aspects of the UIST 2013 program, and that you get to enjoy our wonderful venues and that your discussions and interactions prove fruitful.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128831516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Identifying emergent behaviours from longitudinal web use
Aitor Apaolaza
Laboratory studies make it difficult to understand how usage evolves over time: the observation methods they employ are obtrusive and not naturalistic. Our system uses a remote capture tool that provides longitudinal low-level interaction data. It is easily deployable into any Web site, allowing deployments in the wild, and is completely unobtrusive. Web application interfaces are designed around assumed user goals: requirement specifications contain well-defined use cases and scenarios that drive design and subsequent optimisations, and interaction patterns outside the expected ones are not considered. The result is an optimisation for a stylised user rather than a real one. A bottom-up analysis of low-level interaction data allows users' actual tasks to emerge: similarities among users can be found, and solutions that are effective for real users can be designed. Factors such as learnability and the effect of interface changes on users are difficult to observe in laboratory studies; our solution makes this possible by adding a longitudinal point of view to traditional laboratory studies. The capture tool is deployed in real-world Web applications, capturing in-situ data from users. These data serve to explore analysis and visualisation possibilities. We present an example of the exploration results with one Web application.
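As an illustration of the bottom-up analysis described above, recurring patterns can be mined directly from the captured low-level event stream; in the sketch below the event names and session gap are assumptions:

    from collections import Counter

    # Hypothetical low-level interaction log: (timestamp_s, event_name).
    LOG = [(0.0, "focus:search"), (1.2, "keypress"), (2.0, "click:result"),
           (60.5, "focus:search"), (61.0, "keypress"), (62.3, "click:result"),
           (300.0, "scroll"), (301.0, "click:nav")]

    def sessions(log, gap_s=30.0):
        """Split the event stream into sessions at pauses longer than gap_s."""
        session, last_t = [], None
        for t, ev in log:
            if last_t is not None and t - last_t > gap_s:
                yield session
                session = []
            session.append(ev)
            last_t = t
        if session:
            yield session

    def frequent_bigrams(log):
        """Count adjacent event pairs; recurring pairs hint at emergent tasks."""
        counts = Counter()
        for s in sessions(log):
            counts.update(zip(s, s[1:]))
        return counts.most_common(3)

    print(frequent_bigrams(LOG))  # the search-then-click pattern appears twice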
{"title":"Identifying emergent behaviours from longitudinal web use","authors":"Aitor Apaolaza","doi":"10.1145/2508468.2508475","DOIUrl":"https://doi.org/10.1145/2508468.2508475","url":null,"abstract":"Laboratory studies present difficulties in the understanding of how usage evolves over time. Employed observations are obtrusive and not naturalistic. Our system employs a remote capture tool that provides longitudinal low-level interaction data. It is easily deployable into any Web site allowing deployments in-the-wild and is completely unobtrusive. Web application interfaces are designed assuming users' goals. Requirement specifications contain well defined use cases and scenarios that drive design and subsequent optimisations. Users' interaction patterns outside the expected ones are not considered. This results in an optimisation for a stylised user rather than a real one. A bottom-up analysis from low-level interaction data makes possible the emergence of users' tasks. Similarities among users can be found and solutions that are effective for real users can be designed. Factors such as learnability and how interface changes affect users are difficult to observe in laboratory studies. Our solution makes it possible, adding a longitudinal point of view to traditional laboratory studies. The capture tool is deployed in real world Web applications capturing in-situ data from users. These data serve to explore analysis and visualisation possibilities. We present an example of the exploration results with one Web application.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127087853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
BackTap: robust four-point tapping on the back of an off-the-shelf smartphone
Cheng Zhang, Aman Parnami, Caleb Southern, Edison Thomaz, Gabriel Reyes, R. Arriaga, G. Abowd
We present BackTap, an interaction technique that extends the input modality of a smartphone with four distinct tap locations on the back case of the device. The BackTap interaction can be used eyes-free while walking, with the phone in a pocket, purse, or armband, or while holding the phone with two hands so as not to occlude the screen with the fingers. We employ three common built-in sensors on the smartphone (microphone, gyroscope, and accelerometer) and feature a lightweight heuristic implementation. In an evaluation with eleven participants and three usage conditions, users were able to tap four distinct points with 92% to 96% accuracy.
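A sketch of the kind of lightweight heuristic the abstract alludes to: an accelerometer magnitude spike marks the tap, and the signs of the gyroscope's rotation at that instant separate the four back-case locations. The axis conventions and thresholds are assumptions, not the authors' published values.

    # Hypothetical BackTap-style heuristic: classify a back-of-phone tap into
    # one of four quadrants from gyroscope rotation at the accelerometer spike.

    def detect_tap(accel_magnitudes, threshold=2.5):
        """Return the index of the first accel spike above threshold (in g)."""
        for i, a in enumerate(accel_magnitudes):
            if a > threshold:
                return i
        return None

    def classify_quadrant(gyro_x, gyro_y):
        """Sign of pitch/roll impulse picks the quadrant (axis signs assumed)."""
        vertical = "top" if gyro_x > 0 else "bottom"
        horizontal = "left" if gyro_y > 0 else "right"
        return f"{vertical}-{horizontal}"

    accel = [1.0, 1.1, 3.2, 1.4]          # magnitude trace, in g (synthetic)
    gyro = [(0.0, 0.1), (0.2, -0.1), (0.9, -0.6), (0.1, 0.0)]  # (x, y) rad/s
    i = detect_tap(accel)
    if i is not None:
        print(classify_quadrant(*gyro[i]))  # -> "top-right"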
{"title":"BackTap: robust four-point tapping on the back of an off-the-shelf smartphone","authors":"Cheng Zhang, Aman Parnami, Caleb Southern, Edison Thomaz, Gabriel Reyes, R. Arriaga, G. Abowd","doi":"10.1145/2508468.2514735","DOIUrl":"https://doi.org/10.1145/2508468.2514735","url":null,"abstract":"We present BackTap, an interaction technique that extends the input modality of a smartphone to add four distinct tap locations on the back case of a smartphone. The BackTap interaction can be used eyes-free with the phone in a user's pocket, purse, or armband while walking, or while holding the phone with two hands so as not to occlude the screen with the fingers. We employ three common built-in sensors on the smartphone (microphone, gyroscope, and accelerometer) and feature a lightweight heuristic implementation. In an evaluation with eleven participants and three usage conditions, users were able to tap four distinct points with 92% to 96% accuracy.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121546367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Enabling an ecosystem of personal behavioral data
Jason Wiese
Almost every computational system a person interacts with keeps a detailed log of that person's behavior. This data promises a breadth of new service opportunities for improving people's lives through deep personalization, tools to manage aspects of personal wellbeing, and services that support identity construction. However, the way this data is collected and managed today introduces several challenges that severely limit its utility. This thesis maps out a computational ecosystem for personal behavioral data through the design, implementation, and evaluation of Phenom, a web service that factors out the activities common to making inferences from personal behavioral data. The primary benefits of Phenom include: a structured process for aggregating and representing user data; support for developing models based on personal behavioral data; and a unified API for accessing inferences made by models within Phenom. To evaluate Phenom's ease of use and versatility, an external set of developers will create example applications with it.
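The abstract promises a unified API for model inferences; the sketch below shows what client code against such a service might look like. The endpoint, resource names, and response fields are hypothetical illustrations, not Phenom's documented interface.

    import requests  # third-party HTTP client

    # Hypothetical Phenom-style client: one API for inferences produced by
    # models over aggregated personal behavioral data. URL/fields are invented.
    BASE = "https://phenom.example/api/v1"

    def infer(user_id, model, token):
        """Fetch one model's inference for one user from the assumed service."""
        resp = requests.get(
            f"{BASE}/users/{user_id}/inferences/{model}",
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"model": "commute_mode", "value": "bike"}

    # Example usage (requires a live service):
    # profile = infer("u123", "commute_mode", token="...")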
{"title":"Enabling an ecosystem of personal behavioral data","authors":"Jason Wiese","doi":"10.1145/2508468.2508472","DOIUrl":"https://doi.org/10.1145/2508468.2508472","url":null,"abstract":"Almost every computational system a person interacts with keeps a detailed log of that person's behavior. The possibility of this data promises a breadth of new service opportunities for improving people's lives through deep personalization, tools to manage aspects of their personal wellbeing, and services that support identity construction. However, the way that this data is collected and managed today introduces several challenges that severely limit the utility of this rich data. This thesis maps out a computational ecosystem for personal behavioral data through the design, implementation, and evaluation of Phenom, a web service that factors out common activities in making inferences from personal behavioral data. The primary benefits of Phenom include: a structured process for aggregating and representing user data; support for developing models based on personal behavioral data; and a unified API for accessing inferences made by models within Phenom. To evaluate Phenom for ease of use and versatility, an external set of developers will create example applications with it.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127840900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A touchless passive infrared gesture sensor
Piotr Wojtczuk, T. David Binnie, A. Armitage, T. Chamberlain, C. Giebeler
A sensing device for a touchless hand-gesture user interface, based on an inexpensive passive infrared pyroelectric detector array, is presented. The 2 x 2 element sensor responds to the changing infrared radiation generated by hand movement over the array. The sensing range is from a few millimetres to tens of centimetres. Low power consumption (< 50 μW) enables the sensor's use in mobile devices and in low-energy applications. Detection rates of 77% have been demonstrated using a prototype system that differentiates the four main hand motion trajectories: up, down, left, and right. The device allows greater non-contact control capability without an increase in size, cost, or power consumption over existing on/off devices.
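With a 2 x 2 pyroelectric array, direction can be recovered from which element pair responds first: the lag between the left and right columns separates left from right, and the lag between the top and bottom rows separates up from down. A minimal sketch over threshold-crossing timestamps follows (the geometry and lag threshold are assumptions):

    # Hypothetical direction heuristic for a 2x2 pyroelectric array.
    # t[r][c] is the time (s) at which element (row r, col c) first crossed
    # its detection threshold; rows are top/bottom, cols are left/right.

    def swipe_direction(t, min_lag=0.02):
        left = min(t[0][0], t[1][0])
        right = min(t[0][1], t[1][1])
        top = min(t[0][0], t[0][1])
        bottom = min(t[1][0], t[1][1])
        dx, dy = right - left, bottom - top  # positive lag = later response
        if abs(dx) >= abs(dy) and abs(dx) > min_lag:
            return "right" if dx > 0 else "left"
        if abs(dy) > min_lag:
            return "down" if dy > 0 else "up"
        return "none"

    # Hand sweeping left-to-right: left column fires ~40 ms before the right.
    print(swipe_direction([[0.00, 0.04], [0.01, 0.05]]))  # -> "right"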
{"title":"A touchless passive infrared gesture sensor","authors":"Piotr Wojtczuk, T. David Binnie, A. Armitage, T. Chamberlain, C. Giebeler","doi":"10.1145/2508468.2514713","DOIUrl":"https://doi.org/10.1145/2508468.2514713","url":null,"abstract":"A sensing device for a touchless, hand gesture, user interface based on an inexpensive passive infrared pyroelectric detector array is presented. The 2 x 2 element sensor responds to changing infrared radiation generated by hand movement over the array. The sensing range is from a few millimetres to tens of centimetres. The low power consumption (< 50 μW) enables the sensor's use in mobile devices and in low energy applications. Detection rates of 77% have been demonstrated using a prototype system that differentiates the four main hand motion trajectories -- up, down, left and right. This device allows greater non-contact control capability without an increase in size, cost or power consumption over existing on/off devices.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"5 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116661604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
A cluster information navigate method by gaze tracking
Dawei Cheng, Danqiong Li, Liang Fang
With the rapid growth of data volume, it is increasingly difficult to present and navigate large amounts of data conveniently on mobile devices with small screens. To address this challenge, we present a new method that displays cluster information in a hierarchical pattern and lets users interact with it through eye movements captured by the front camera of the mobile device. The key of this system is a new interaction method that allows users to navigate and select data quickly with their eyes, without any additional equipment.
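One common way to turn front-camera gaze estimates into selection, consistent with the interaction described above, is dwell time: a cluster node is expanded once gaze rests on it long enough. A minimal sketch; the gaze samples, hit-testing, and dwell threshold are assumed.

    # Hypothetical dwell-based selection over a cluster hierarchy: expand a
    # node once gaze has rested on it for dwell_s seconds.

    def dwell_select(gaze_samples, hit_test, dwell_s=0.8):
        """gaze_samples: iterable of (t_seconds, x, y) gaze estimates.
        hit_test(x, y): returns the cluster node under that point, or None.
        Returns the first node fixated for at least dwell_s."""
        current, since = None, None
        for t, x, y in gaze_samples:
            node = hit_test(x, y)
            if node != current:
                current, since = node, t
            elif node is not None and t - since >= dwell_s:
                return node  # expand this cluster in the hierarchy view
        return None

    def hit(x, y):
        # Toy layout: left half of the screen is cluster "A", right half "B".
        return "A" if x < 160 else "B"

    samples = [(0.0, 200, 90), (0.3, 205, 92), (0.9, 198, 95)]
    print(dwell_select(samples, hit))  # -> "B" after 0.9 s of dwell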
{"title":"A cluster information navigate method by gaze tracking","authors":"Dawei Cheng, Danqiong Li, Liang Fang","doi":"10.1145/2508468.2514710","DOIUrl":"https://doi.org/10.1145/2508468.2514710","url":null,"abstract":"According to the rapid growth of data volume, it's increasingly complicated to present and navigate large amount of data in a convenient method on mobile devices with a small screen. To address this challenge, we present a new method which displays cluster information in a hierarchy pattern and interact with them by eyes' movement captured by the front camera of mobile devices. The key of this system is providing users a new interacting method to navigate and select data quickly by eyes without any additional equipment.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123253142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Sensor design and interaction techniques for gestural input to smart glasses and mobile devices
Andrea Colaco
Touchscreen interfaces for small display devices have several limitations: the act of touching the screen occludes the display, interface elements like keyboards consume precious display real estate, and even simple tasks like document navigation - which the user performs effortlessly using a mouse and keyboard - require repeated actions like pinch-and-zoom with touch input. More recently, smart glasses with limited or no touch input are starting to emerge commercially. However, the primary input to these systems has been voice. In this paper, we explore the space around the device as a means of touchless gestural input to devices with small or no displays. Capturing gestural input in the surrounding volume requires sensing the human hand. To achieve gestural input we have built Mime [3] -- a compact, low-power 3D sensor for short-range gestural control of small display devices. Our sensor is based on a novel signal processing pipeline and is built using standard off-the-shelf components. Using Mime, we demonstrated a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight. In my thesis, I will continue to extend sensor capabilities to support new interaction styles.
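As an illustration of the short-range gestural control Mime targets (the abstract does not describe its pipeline at this level), the sketch below reduces a stream of 3D hand positions to a swipe gesture; the coordinate convention and displacement threshold are assumptions.

    # Hypothetical gesture mapping for short-range 3D input: net hand
    # displacement over a window is reduced to one of six swipe directions.

    def classify_swipe(track, min_disp_m=0.05):
        """track: list of (x, y, z) hand positions in metres over ~0.5 s."""
        if len(track) < 2:
            return "none"
        dx = track[-1][0] - track[0][0]
        dy = track[-1][1] - track[0][1]
        dz = track[-1][2] - track[0][2]
        # Dominant axis of motion decides the gesture.
        axis, disp = max(enumerate((dx, dy, dz)), key=lambda p: abs(p[1]))
        if abs(disp) < min_disp_m:
            return "none"
        names = (("left", "right"), ("down", "up"), ("pull", "push"))
        return names[axis][disp > 0]

    track = [(0.00, 0.10, 0.30), (0.03, 0.10, 0.30), (0.09, 0.11, 0.31)]
    print(classify_swipe(track))  # -> "right" (9 cm of +x displacement)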
{"title":"Sensor design and interaction techniques for gestural input to smart glasses and mobile devices","authors":"Andrea Colaco","doi":"10.1145/2508468.2508474","DOIUrl":"https://doi.org/10.1145/2508468.2508474","url":null,"abstract":"Touchscreen interfaces for small display devices have several limitations: the act of touching the screen occludes the display, interface elements like keyboards consume precious display real estate, and even simple tasks like document navigation - which the user performs effortlessly using a mouse and keyboard - require repeated actions like pinch-and-zoom with touch input. More recently, smart glasses with limited or no touch input are starting to emerge commercially. However, the primary input to these systems has been voice. In this paper, we explore the space around the device as a means of touchless gestural input to devices with small or no displays. Capturing gestural input in the surrounding volume requires sensing the human hand. To achieve gestural input we have built Mime [3] -- a compact, low-power 3D sensor for short-range gestural control of small display devices. Our sensor is based on a novel signal processing pipeline and is built using standard off-the-shelf components. Using Mime we demonstrated a variety of application scenarios including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions. In my thesis, I will continue to extend sensor capabilities to support new interaction styles.","PeriodicalId":196872,"journal":{"name":"Adjunct Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130698568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8