
Proceedings of the 27th annual ACM symposium on User interface software and technology: Latest Publications

Making the web easier to see with opportunistic accessibility improvement
Jeffrey P. Bigham
Many people would find the Web easier to use if content was a little bigger, even those who already find the Web possible to use now. This paper introduces the idea of opportunistic accessibility improvement in which improvements intended to make a web page easier to access, such as magnification, are automatically applied to the extent that they can be without causing negative side effects. We explore this idea with oppaccess.js, an easily-deployed system for magnifying web pages that iteratively increases magnification until it notices negative side effects, such as horizontal scrolling or overlapping text. We validate this approach by magnifying existing web pages 1.6x on average without introducing negative side effects. We believe this concept applies generally across a wide range of accessibility improvements designed to help people with diverse abilities.
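A minimal sketch of the iterative idea above, assuming a browser context: raise the zoom level step by step and roll back as soon as a side effect is observed. The step size, the use of CSS zoom, and the overlap heuristic are illustrative assumptions, not the actual oppaccess.js implementation.

```typescript
// Illustrative sketch: iteratively increase page magnification and roll
// back once a negative side effect (horizontal scrolling or overlapping
// text) is detected. Step size and heuristics are assumptions.

function hasHorizontalScroll(): boolean {
  return document.documentElement.scrollWidth > window.innerWidth;
}

function textBlocksOverlap(): boolean {
  // Naive pairwise check of paragraph bounding boxes.
  const rects = Array.from(document.querySelectorAll("p")).map(el =>
    el.getBoundingClientRect()
  );
  for (let i = 0; i < rects.length; i++) {
    for (let j = i + 1; j < rects.length; j++) {
      const a = rects[i], b = rects[j];
      if (a.left < b.right && b.left < a.right &&
          a.top < b.bottom && b.top < a.bottom) {
        return true;
      }
    }
  }
  return false;
}

function applyZoom(factor: number): void {
  // One simple way to scale content; the real system may magnify differently.
  (document.body.style as any).zoom = String(factor);
}

function opportunisticMagnify(maxFactor = 3.0, step = 0.1): number {
  let factor = 1.0;
  while (factor + step <= maxFactor) {
    applyZoom(factor + step);
    if (hasHorizontalScroll() || textBlocksOverlap()) {
      applyZoom(factor); // roll back to the last level without side effects
      break;
    }
    factor += step;
  }
  return factor; // largest magnification applied without observed side effects
}
```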
Citations: 37
Detecting tapping motion on the side of mobile devices by probabilistically combining hand postures
William McGrath, Yang Li
We contribute a novel method for detecting finger taps on the different sides of a smartphone, using the built-in motion sensors of the device. In particular, we discuss new features and algorithms that infer side taps by probabilistically combining estimates of tap location and the hand pose--the hand holding the device. Based on a dataset collected from 9 participants, our method achieved 97.3% precision and 98.4% recall on tap event detection against ambient motion. For detecting single-tap locations, our method outperformed an approach that uses inferred hand postures deterministically by 3% and an approach that does not use hand posture inference by 17%. For inferring the location of two consecutive side taps from the same direction, our method outperformed the two baseline approaches by 6% and 17% respectively. We discuss our insights into designing the detection algorithm and the implication on side tap-based interaction behaviors.
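A minimal sketch of the probabilistic combination described above: instead of committing to one inferred hand posture, the per-posture tap-side estimates are weighted by the posture distribution. The posture and side labels and the classifier outputs are illustrative assumptions, not the paper's actual features or models.

```typescript
// Illustrative sketch: marginalize the per-posture side estimate over the
// hand-posture estimate rather than using a single inferred posture.

type Posture = "left-hand" | "right-hand" | "two-hand";
type Side = "left" | "right" | "top" | "bottom";

// P(posture | motion features), e.g. from one classifier (assumed).
type PostureDist = Record<Posture, number>;
// P(side | motion features, posture), e.g. from per-posture classifiers (assumed).
type SideDistPerPosture = Record<Posture, Record<Side, number>>;

function combineSideEstimate(
  postureDist: PostureDist,
  sidePerPosture: SideDistPerPosture
): Side {
  const sides: Side[] = ["left", "right", "top", "bottom"];
  const combined = new Map<Side, number>();
  for (const side of sides) {
    let p = 0;
    for (const posture of Object.keys(postureDist) as Posture[]) {
      // P(side) = sum over postures of P(side | posture) * P(posture)
      p += sidePerPosture[posture][side] * postureDist[posture];
    }
    combined.set(side, p);
  }
  // Return the most probable side under the combined distribution.
  return sides.reduce(
    (best, s) => (combined.get(s)! > combined.get(best)! ? s : best),
    sides[0]
  );
}
```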
Citations: 24
Tag system with low-powered tag and depth sensing camera
H. Manabe, Wataru Yamada, H. Inamura
A tag system is proposed that offers a practical approach to ubiquitous computing. It provides small and low-power tags that are easy to distribute; does not need a special device to read the tags (in the future), thus enabling their use anytime, anywhere; and has a wide reading range in angle and distance that extends the design space of tag-based applications. The tag consists of a kind of liquid crystal (LC) and a retroreflector, and it sends its ID by switching the LC. A depth sensing camera that emits infrared (IR) is used as the tag reader; we assume that it will be part of the user's everyday devices, such as a smartphone. Experiments were conducted to confirm its potential, and a regular IR camera was also tested for comparison. The results show that the tag system has a wide readable range in terms of both distance (up to 8m) and viewing angle offset. Several applications were also developed to explore the design space. Finally, limitations of the current setup and possible improvements are discussed.
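A minimal sketch of how a reader might recover the tag's ID from the modulated retroreflection, assuming one bit per camera frame and a fixed intensity threshold; the framing and bit length are illustrative assumptions, not the paper's actual coding scheme.

```typescript
// Illustrative sketch: the tag switches its liquid crystal to modulate the
// retroreflected IR, so a reader can threshold the per-frame intensity at
// the tag's location and read out the resulting bit pattern.

function decodeTagId(intensities: number[], bitsPerId = 8, threshold = 0.5): number[] {
  // Threshold each frame's intensity into a bit.
  const bits = intensities.map(v => (v >= threshold ? 1 : 0));
  // Group consecutive bits into IDs. This assumes frames are already
  // aligned to ID boundaries; a real reader would need a sync preamble.
  const ids: number[] = [];
  for (let i = 0; i + bitsPerId <= bits.length; i += bitsPerId) {
    let id = 0;
    for (let b = 0; b < bitsPerId; b++) id = (id << 1) | bits[i + b];
    ids.push(id);
  }
  return ids;
}
```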
Citations: 6
Data-driven interaction techniques for improving navigation of educational videos
Juho Kim, Philip J. Guo, Carrie J. Cai, Shang-Wen Li, Krzysztof Z Gajos, Rob Miller
With an unprecedented scale of learners watching educational videos on online platforms such as MOOCs and YouTube, there is an opportunity to incorporate data generated from their interactions into the design of novel video interaction techniques. Interaction data has the potential to help not only instructors to improve their videos, but also to enrich the learning experience of educational video watchers. This paper explores the design space of data-driven interaction techniques for educational video navigation. We introduce a set of techniques that augment existing video interface widgets, including: a 2D video timeline with an embedded visualization of collective navigation traces; dynamic and non-linear timeline scrubbing; data-enhanced transcript search and keyword summary; automatic display of relevant still frames next to the video; and a visual summary representing points with high learner activity. To evaluate the feasibility of the techniques, we ran a laboratory user study with simulated learning tasks. Participants rated watching lecture videos with interaction data to be efficient and useful in completing the tasks. However, no significant differences were found in task performance, suggesting that interaction data may not always align with moment-by-moment information needs during the tasks.
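A minimal sketch of the data aggregation behind such a timeline: collapsing learners' playback segments into per-second activity counts that can back the trace visualization and the visual summary of high-activity points. The event format and the highlight threshold are illustrative assumptions.

```typescript
// Illustrative sketch: aggregate collective navigation traces into
// per-second view counts over the video timeline.

interface PlaySegment {
  userId: string;
  startSec: number; // where playback started
  endSec: number;   // where playback stopped or the learner scrubbed away
}

function activityHistogram(segments: PlaySegment[], videoLengthSec: number): number[] {
  const counts = new Array(Math.ceil(videoLengthSec)).fill(0);
  for (const seg of segments) {
    const start = Math.max(0, Math.floor(seg.startSec));
    const end = Math.min(counts.length, Math.ceil(seg.endSec));
    for (let s = start; s < end; s++) counts[s] += 1; // one view per second watched
  }
  return counts;
}

// Seconds whose activity exceeds a threshold could be surfaced as
// candidate points in a visual summary of high learner activity.
function highActivityPoints(counts: number[], threshold: number): number[] {
  return counts.flatMap((c, sec) => (c >= threshold ? [sec] : []));
}
```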
Citations: 179
RoomAlive: magical experiences enabled by scalable, adaptive projector-camera units
Brett R. Jones, Rajinder Sodhi, Michael Murdock, Ravish Mehra, Hrvoje Benko, Andrew D. Wilson, E. Ofek, B. MacIntyre, N. Raghuvanshi, Lior Shapira
RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection mapping experiences that dynamically adapt content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The projector-depth camera units are individually auto-calibrating, self-localizing, and create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally, we showcase four experience prototypes that demonstrate the novel interactive experiences that are possible with RoomAlive and discuss the design challenges of adapting any game to any room.
Citations: 315
LightRing: always-available 2D input on any surface
W. Kienzle, K. Hinckley
We present LightRing, a wearable sensor in a ring form factor that senses the 2d location of a fingertip on any surface, independent of orientation or material. The device consists of an infrared proximity sensor for measuring finger flexion and a 1-axis gyroscope for measuring finger rotation. Notably, LightRing tracks subtle fingertip movements from the finger base without requiring instrumentation of other body parts or the environment. This keeps the normal hand function intact and allows for a socially acceptable appearance. We evaluate LightRing in a 2d pointing experiment in two scenarios: on a desk while sitting down, and on the leg while standing. Our results indicate that the device has potential to enable a variety of rich mobile input scenarios.
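A minimal sketch of one way the two sensor channels could map to 2D cursor motion, with finger flexion driving the vertical axis and finger rotation driving the horizontal axis; the linear mapping and the gain constants are illustrative assumptions and may differ from the paper's model.

```typescript
// Illustrative sketch: fuse an IR proximity reading (finger flexion) and a
// 1-axis gyroscope rate (finger rotation) into a 2D cursor position.

interface RingSample {
  proximity: number;   // IR proximity reading, assumed larger when the finger is flexed
  yawRateDps: number;  // gyroscope rotation rate, degrees per second
  dtSec: number;       // time since the previous sample
}

class CursorTracker {
  private x = 0;
  private y = 0;
  private lastProximity: number | null = null;

  constructor(
    private readonly gainX = 2.0, // pixels per degree of rotation (assumed)
    private readonly gainY = 0.5  // pixels per unit change in proximity (assumed)
  ) {}

  update(sample: RingSample): { x: number; y: number } {
    // Horizontal: integrate the rotation rate over the sample interval.
    this.x += this.gainX * sample.yawRateDps * sample.dtSec;
    // Vertical: a change in flexion moves the cursor up or down.
    if (this.lastProximity !== null) {
      this.y += this.gainY * (sample.proximity - this.lastProximity);
    }
    this.lastProximity = sample.proximity;
    return { x: this.x, y: this.y };
  }
}
```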
Citations: 88
Crowd-powered parameter analysis for visual design exploration
Yuki Koyama, Daisuke Sakamoto, T. Igarashi
Parameter tweaking is one of the fundamental tasks in the editing of visual digital contents, such as correcting photo color or executing blendshape facial expression control. A problem with parameter tweaking is that it often requires much time and effort to explore a high-dimensional parameter space. We present a new technique to analyze such high-dimensional parameter space to obtain a distribution of human preference. Our method uses crowdsourcing to gather pairwise comparisons between various parameter sets. As a result of analysis, the user obtains a goodness function that computes the goodness value of a given parameter set. This goodness function enables two interfaces for exploration: Smart Suggestion, which provides suggestions of preferable parameter sets, and VisOpt Slider, which interactively visualizes the distribution of goodness values on sliders and gently optimizes slider values while the user is editing. We created four applications with different design parameter spaces. As a result, the system could facilitate the user's design exploration.
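A minimal sketch of deriving a goodness score from crowdsourced pairwise comparisons, using a Bradley-Terry-style fit by gradient ascent; the estimator and hyperparameters shown here are illustrative assumptions, and how the paper generalizes scores over the continuous parameter space is not covered.

```typescript
// Illustrative sketch: fit one score per compared parameter set so that
// P(winner beats loser) = sigmoid(score_winner - score_loser).

interface Comparison { winner: number; loser: number; } // indices of parameter sets

function fitGoodness(
  numSets: number,
  comparisons: Comparison[],
  iterations = 500,
  lr = 0.05
): number[] {
  const scores = new Array(numSets).fill(0);
  const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));
  for (let it = 0; it < iterations; it++) {
    const grad = new Array(numSets).fill(0);
    for (const { winner, loser } of comparisons) {
      // Gradient of log P(winner beats loser) with respect to the two scores.
      const p = sigmoid(scores[winner] - scores[loser]);
      grad[winner] += 1 - p;
      grad[loser] -= 1 - p;
    }
    for (let i = 0; i < numSets; i++) scores[i] += lr * grad[i];
  }
  return scores; // higher score: preferred more often in comparisons
}
```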
Citations: 63
High rate, low-latency multi-touch sensing with simultaneous orthogonal multiplexing
D. Leigh, C. Forlines, Ricardo Jota, Steven Sanders, Daniel J. Wigdor
We present "Fast Multi-Touch" (FMT), an extremely high frame rate and low-latency multi-touch sensor based on a novel projected capacitive architecture that employs simultaneous orthogonal signals. The sensor has a frame rate of 4000 Hz and a touch-to-data output latency of only 40 microseconds, providing unprecedented responsiveness. FMT is demonstrated with a high-speed DLP projector yielding a touch-to-light latency of 110 microseconds.
Citations: 27
Tohme: detecting curb ramps in google street view using crowdsourcing, computer vision, and machine learning
Kotaro Hara, J. Sun, Robert Moore, D. Jacobs, Jon E. Froehlich
Building on recent prior work that combines Google Street View (GSV) and crowdsourcing to remotely collect information on physical world accessibility, we present the first 'smart' system, Tohme, that combines machine learning, computer vision (CV), and custom crowd interfaces to find curb ramps remotely in GSV scenes. Tohme consists of two workflows, a human labeling pipeline and a CV pipeline with human verification, which are scheduled dynamically based on predicted performance. Using 1,086 GSV scenes (street intersections) from four North American cities and data from 403 crowd workers, we show that Tohme performs similarly in detecting curb ramps compared to a manual labeling approach alone (F-measure: 84% vs. 86% baseline) but at a 13% reduction in time cost. Our work contributes the first CV-based curb ramp detection system, a custom machine-learning based workflow controller, a validation of GSV as a viable curb ramp data source, and a detailed examination of why curb ramp detection is a hard problem along with steps forward.
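A minimal sketch of the dynamic scheduling idea: route each scene to the cheaper CV-plus-verification pipeline when a predictor expects CV to do well, and to full manual labeling otherwise. The predictor, its features, and the threshold are illustrative assumptions, not Tohme's actual controller.

```typescript
// Illustrative sketch: a workflow controller that assigns each street-view
// scene to one of two pipelines based on predicted CV performance.

interface Scene { id: string; complexityScore: number; } // assumed scene feature

type Pipeline = "cv-with-verification" | "manual-labeling";

function predictCvPerformance(scene: Scene): number {
  // Stand-in for a learned predictor: simpler scenes, higher expected performance.
  return Math.max(0, Math.min(1, 1 - scene.complexityScore));
}

function schedule(scenes: Scene[], threshold = 0.7): Map<Pipeline, Scene[]> {
  const plan = new Map<Pipeline, Scene[]>([
    ["cv-with-verification", []],
    ["manual-labeling", []],
  ]);
  for (const scene of scenes) {
    const route: Pipeline = predictCvPerformance(scene) >= threshold
      ? "cv-with-verification"   // CV proposes curb ramps, crowd only verifies
      : "manual-labeling";       // crowd labels the scene from scratch
    plan.get(route)!.push(scene);
  }
  return plan;
}
```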
Citations: 101
PrintScreen: fabricating highly customizable thin-film touch-displays
Simon Olberding, Michael Wessely, Jürgen Steimle
PrintScreen is an enabling technology for digital fabrication of customized flexible displays using thin-film electroluminescence (TFEL). It enables inexpensive and rapid fabrication of highly customized displays in low volume, in a simple lab environment, print shop or even at home. We show how to print ultra-thin (120 µm) segmented and passive matrix displays in greyscale or multi-color on a variety of deformable and rigid substrate materials, including PET film, office paper, leather, metal, stone, and wood. The displays can have custom, unconventional 2D shapes and can be bent, rolled and folded to create 3D shapes. We contribute a systematic overview of graphical display primitives for customized displays and show how to integrate them with static print and printed electronics. Furthermore, we contribute a sensing framework, which leverages the display itself for touch sensing. To demonstrate the wide applicability of PrintScreen, we present application examples from ubiquitous, mobile and wearable computing.
Citations: 151