
Latest Publications: Proceedings of the 27th annual ACM symposium on User interface software and technology

PortraitSketch: face sketching assistance for novices
Jun Xie, Aaron Hertzmann, Wilmot Li, H. Winnemöller
We present PortraitSketch, an interactive drawing system that helps novices create pleasing, recognizable face sketches without requiring prior artistic training. As the user traces over a source portrait photograph, PortraitSketch automatically adjusts the geometry and stroke parameters (thickness, opacity, etc.) to improve the aesthetic quality of the sketch. We present algorithms for adjusting both outlines and shading strokes based on important features of the underlying source image. In contrast to automatic stylization systems, PortraitSketch is designed to encourage a sense of ownership and accomplishment in the user. To this end, all adjustments are performed in real-time, and the user ends up directly drawing all strokes on the canvas. The findings from our user study suggest that users prefer drawing with some automatic assistance, thereby producing better drawings, and that assistance does not decrease the perceived level of involvement in the creative process.
{"title":"PortraitSketch: face sketching assistance for novices","authors":"Jun Xie, Aaron Hertzmann, Wilmot Li, H. Winnemöller","doi":"10.1145/2642918.2647399","DOIUrl":"https://doi.org/10.1145/2642918.2647399","url":null,"abstract":"We present PortraitSketch, an interactive drawing system that helps novices create pleasing, recognizable face sketches without requiring prior artistic training. As the user traces over a source portrait photograph, PortraitSketch automatically adjusts the geometry and stroke parameters (thickness, opacity, etc.) to improve the aesthetic quality of the sketch. We present algorithms for adjusting both outlines and shading strokes based on important features of the underlying source image. In contrast to automatic stylization systems, PortraitSketch is designed to encourage a sense of ownership and accomplishment in the user. To this end, all adjustments are performed in real-time, and the user ends up directly drawing all strokes on the canvas. The findings from our user study suggest that users prefer drawing with some automatic assistance, thereby producing better drawings, and that assistance does not decrease the perceived level of involvement in the creative process.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78603547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 67
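PortraitSketch's abstract describes adjusting stroke geometry and stroke parameters toward salient features of the source photo. A minimal sketch of that idea, not the authors' algorithm: nudge each traced point toward the strongest nearby edge and derive thickness from local edge strength. All function names and constants here are hypothetical.

```python
import numpy as np

def edge_strength(gray):
    """Gradient-magnitude map of a grayscale image (H x W float array)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def adjust_stroke(points, gray, radius=4, alpha=0.5):
    """Illustrative correction (not the paper's method): pull each (x, y)
    stroke point partway toward the strongest edge pixel within `radius`,
    and scale per-point thickness by that edge's strength."""
    E = edge_strength(gray)
    h, w = E.shape
    adjusted, widths = [], []
    for x, y in points:
        x0, x1 = max(0, int(x) - radius), min(w, int(x) + radius + 1)
        y0, y1 = max(0, int(y) - radius), min(h, int(y) + radius + 1)
        window = E[y0:y1, x0:x1]
        iy, ix = np.unravel_index(np.argmax(window), window.shape)
        tx, ty = x0 + ix, y0 + iy                    # strongest nearby edge
        adjusted.append(((1 - alpha) * x + alpha * tx,
                         (1 - alpha) * y + alpha * ty))
        widths.append(1.0 + 2.0 * window[iy, ix] / (E.max() + 1e-9))
    return adjusted, widths

rng = np.random.default_rng(1)
photo = rng.random((120, 120))                      # stand-in for a portrait photo
corrected, widths = adjust_stroke([(30.4, 52.8), (31.1, 54.0)], photo)
```

Blending with `alpha` rather than snapping outright matches the paper's goal of keeping the user's hand visibly in the loop.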
World-stabilized annotations and virtual scene navigation for remote collaboration
Steffen Gauglitz, B. Nuernberger, M. Turk, Tobias Höllerer
We present a system that supports an augmented shared visual space for live mobile remote collaboration on physical tasks. The remote user can explore the scene independently of the local user's current camera position and can communicate via spatial annotations that are immediately visible to the local user in augmented reality. Our system operates on off-the-shelf hardware and uses real-time visual tracking and modeling, thus not requiring any preparation or instrumentation of the environment. It creates a synergy between video conferencing and remote scene exploration under a unique coherent interface. To evaluate the collaboration with our system, we conducted an extensive outdoor user study with 60 participants comparing our system with two baseline interfaces. Our results indicate an overwhelming user preference (80%) for our system, a high level of usability, as well as performance benefits compared with one of the two baselines.
{"title":"World-stabilized annotations and virtual scene navigation for remote collaboration","authors":"Steffen Gauglitz, B. Nuernberger, M. Turk, Tobias Höllerer","doi":"10.1145/2642918.2647372","DOIUrl":"https://doi.org/10.1145/2642918.2647372","url":null,"abstract":"We present a system that supports an augmented shared visual space for live mobile remote collaboration on physical tasks. The remote user can explore the scene independently of the local user's current camera position and can communicate via spatial annotations that are immediately visible to the local user in augmented reality. Our system operates on off-the-shelf hardware and uses real-time visual tracking and modeling, thus not requiring any preparation or instrumentation of the environment. It creates a synergy between video conferencing and remote scene exploration under a unique coherent interface. To evaluate the collaboration with our system, we conducted an extensive outdoor user study with 60 participants comparing our system with two baseline interfaces. Our results indicate an overwhelming user preference (80%) for our system, a high level of usability, as well as performance benefits compared with one of the two baselines.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81213715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 180
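The core mechanism behind "world-stabilized" annotations is that a remote user's mark is stored in the tracked world frame and re-projected into every new camera pose, so it stays glued to the physical scene. A hedged sketch under a standard pinhole camera model (the paper's tracking-and-modeling pipeline is far more involved; names here are illustrative):

```python
import numpy as np

def project_annotation(p_world, R, t, K):
    """Project a 3D world-frame annotation point into pixel coordinates for
    the current camera pose. R, t: world-to-camera rotation/translation from
    the visual tracker; K: 3x3 camera intrinsics. Returns (u, v), or None if
    the point is behind the camera."""
    p_cam = R @ p_world + t
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Example with an identity pose and simple intrinsics (illustrative values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
print(project_annotation(np.array([0.1, 0.0, 2.0]), np.eye(3), np.zeros(3), K))
```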
Going to the dogs: towards an interactive touchscreen interface for working dogs
C. Zeagler, Scott M. Gilliland, Larry Freil, Thad Starner, M. Jackson
Computer-mediated interaction for working dogs is an important new domain for interaction research. In domestic settings, touchscreens could provide a way for dogs to communicate critical information to humans. In this paper we explore how a dog might interact with a touchscreen interface. We observe dogs' touchscreen interactions and record the difficulties they encounter relative to what is expected of human touchscreen interactions. We also solve hardware issues through screen adaptations and projection styles to make a touchscreen usable for a canine's nose touch interactions. We also compare our canine touch data to humans' touch data on the same system. Our goal is to understand the affordances needed to make touchscreen interfaces usable for canines and help the future design of touchscreen interfaces for assistive dogs in the home.
{"title":"Going to the dogs: towards an interactive touchscreen interface for working dogs","authors":"C. Zeagler, Scott M. Gilliland, Larry Freil, Thad Starner, M. Jackson","doi":"10.1145/2642918.2647364","DOIUrl":"https://doi.org/10.1145/2642918.2647364","url":null,"abstract":"Computer-mediated interaction for working dogs is an important new domain for interaction research. In domestic settings, touchscreens could provide a way for dogs to communicate critical information to humans. In this paper we explore how a dog might interact with a touchscreen interface. We observe dogs' touchscreen interactions and record difficulties against what is expected of humans' touchscreen interactions. We also solve hardware issues through screen adaptations and projection styles to make a touchscreen usable for a canine's nose touch interactions. We also compare our canine touch data to humans' touch data on the same system. Our goal is to understand the affordances needed to make touchscreen interfaces usable for canines and help the future design of touchscreen interfaces for assistive dogs in the home.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79506690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48
Reflection
Yang Li
By knowing which upcoming action a user might perform, a mobile application can optimize its user interface for accomplishing the task. However, it is technically challenging for developers to implement event prediction in their own application. We created Reflection, an on-device service that answers queries from a mobile application regarding which actions the user is likely to perform at a given time. Any application can register itself and communicate with Reflection via a simple API. Reflection continuously learns a prediction model for each application based on its evolving event history. It employs a novel method for prediction by 1) combining multiple well-designed predictors with an online learning method, and 2) capturing event patterns not only within but also across registered applications--only possible as an infrastructure solution. We evaluated Reflection with two sets of large-scale, in situ mobile event logs, which showed our infrastructure approach is feasible.
{"title":"Reflection","authors":"Yang Li","doi":"10.1145/2642918.2647355","DOIUrl":"https://doi.org/10.1145/2642918.2647355","url":null,"abstract":"By knowing which upcoming action a user might perform, a mobile application can optimize its user interface for accomplishing the task. However, it is technically challenging for developers to implement event prediction in their own application. We created Reflection, an on-device service that answers queries from a mobile application regarding which actions the user is likely to perform at a given time. Any application can register itself and communicate with Reflection via a simple API. Reflection continuously learns a prediction model for each application based on its evolving event history. It employs a novel method for prediction by 1) combining multiple well-designed predictors with an online learning method, and 2) capturing event patterns not only within but also across registered applications--only possible as an infrastructure solution. We evaluated Reflection with two sets of large-scale, in situ mobile event logs, which showed our infrastructure approach is feasible.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"106 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84197609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
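The abstract credits Reflection's predictions to "combining multiple well-designed predictors with an online learning method". One plausible minimal reading of that design, not the paper's actual model, is a multiplicative-weights ensemble over simple event predictors:

```python
from collections import Counter, defaultdict

class MostFrequentPredictor:
    """Predicts the globally most frequent event (illustrative expert)."""
    def __init__(self):
        self.counts = Counter()
    def predict(self, history):
        return self.counts.most_common(1)[0][0] if self.counts else None
    def update(self, history, actual):
        self.counts[actual] += 1

class MarkovPredictor:
    """Predicts the most likely successor of the last event."""
    def __init__(self):
        self.table = defaultdict(Counter)
    def predict(self, history):
        if history and self.table[history[-1]]:
            return self.table[history[-1]].most_common(1)[0][0]
        return None
    def update(self, history, actual):
        if history:
            self.table[history[-1]][actual] += 1

class WeightedEnsemble:
    """Multiplicative-weights combination of experts (our assumption about
    how 'online learning' might combine the predictors)."""
    def __init__(self, experts, beta=0.9):
        self.experts = experts
        self.weights = [1.0] * len(experts)
        self.beta = beta
    def predict(self, history):
        votes = Counter()
        for weight, expert in zip(self.weights, self.experts):
            guess = expert.predict(history)
            if guess is not None:
                votes[guess] += weight
        return votes.most_common(1)[0][0] if votes else None
    def update(self, history, actual):
        for i, expert in enumerate(self.experts):
            guess = expert.predict(history)
            if guess is not None and guess != actual:
                self.weights[i] *= self.beta      # demote experts that missed
            expert.update(history, actual)

model = WeightedEnsemble([MostFrequentPredictor(), MarkovPredictor()])
history = []
for event in ["open_app", "search", "open_app", "search", "open_app"]:
    model.update(history, event)
    history.append(event)
print(model.predict(history))                     # -> "search" (follows "open_app")
```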
ParaFrustum: visualization techniques for guiding a user to a constrained set of viewing positions and orientations
Mengu Sukan, Carmine Elvezio, Ohan Oda, Steven K. Feiner, B. Tversky
Many tasks in real or virtual environments require users to view a target object or location from one of a set of strategic viewpoints to see it in context, avoid occlusions, or view it at an appropriate angle or distance. We introduce ParaFrustum, a geometric construct that represents this set of strategic viewpoints and viewing directions. ParaFrustum is inspired by the look-from and look-at points of a computer graphics camera specification, which precisely delineate a location for the camera and a direction in which it looks. We generalize this approach by defining a ParaFrustum in terms of a look-from volume and a look-at volume, which establish constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DoF pose when it is not required by the task. We describe two visualization techniques for virtual or augmented reality that guide a user to assume one of the poses defined by a ParaFrustum, and present the results of a user study measuring the performance of these techniques. The study shows that the constraints of a tightly constrained ParaFrustum (e.g., approximating a conventional camera frustum) require significantly more time to satisfy than those of a loosely constrained one. The study also reveals interesting differences in participant trajectories in response to the two techniques.
{"title":"ParaFrustum: visualization techniques for guiding a user to a constrained set of viewing positions and orientations","authors":"Mengu Sukan, Carmine Elvezio, Ohan Oda, Steven K. Feiner, B. Tversky","doi":"10.1145/2642918.2647417","DOIUrl":"https://doi.org/10.1145/2642918.2647417","url":null,"abstract":"Many tasks in real or virtual environments require users to view a target object or location from one of a set of strategic viewpoints to see it in context, avoid occlusions, or view it at an appropriate angle or distance. We introduce ParaFrustum, a geometric construct that represents this set of strategic viewpoints and viewing directions. ParaFrustum is inspired by the look-from and look-at points of a computer graphics camera specification, which precisely delineate a location for the camera and a direction in which it looks. We generalize this approach by defining a ParaFrustum in terms of a look-from volume and a look-at volume, which establish constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DoF pose when it is not required by the task. We describe two visualization techniques for virtual or augmented reality that guide a user to assume one of the poses defined by a ParaFrustum, and present the results of a user study measuring the performance of these techniques. The study shows that the constraints of a tightly constrained ParaFrustum (e.g., approximating a conventional camera frustum) require significantly more time to satisfy than those of a loosely constrained one. The study also reveals interesting differences in participant trajectories in response to the two techniques.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87647493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
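From the abstract's definition, a pose satisfies a ParaFrustum when the eye lies inside the look-from volume and the head orientation points at the look-at volume within some tolerance. A geometric sketch of that acceptance test, with spheres standing in for both volumes (our simplification; the paper's volume shapes and tolerances are assumptions here):

```python
import numpy as np

def pose_satisfied(eye, gaze_dir, look_from_c, look_from_r,
                   look_at_c, look_at_r, angle_tol_deg=10.0):
    """True if `eye` is inside the look-from sphere and the gaze direction
    deviates from the eye->look-at ray by at most the tolerance plus the
    angle subtended by the look-at sphere. All shapes are stand-ins."""
    if np.linalg.norm(eye - look_from_c) > look_from_r:
        return False
    to_target = look_at_c - eye
    dist = np.linalg.norm(to_target)
    cos_dev = np.dot(gaze_dir / np.linalg.norm(gaze_dir), to_target / dist)
    deviation = np.degrees(np.arccos(np.clip(cos_dev, -1.0, 1.0)))
    subtended = np.degrees(np.arcsin(min(1.0, look_at_r / dist)))
    return deviation <= angle_tol_deg + subtended

eye = np.array([0.0, 0.0, 0.0])
gaze = np.array([0.0, 0.0, 1.0])
print(pose_satisfied(eye, gaze,
                     look_from_c=np.zeros(3), look_from_r=0.5,
                     look_at_c=np.array([0.1, 0.0, 2.0]), look_at_r=0.2))
```

Growing the two radii loosens the constraint, which is exactly the looseness the study found faster to satisfy than a conventional tight frustum.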
Sensing techniques for tablet+stylus interaction
K. Hinckley, M. Pahud, Hrvoje Benko, Pourang Irani, François Guimbretière, M. Gavriliu, Xiang 'Anthony' Chen, Fabrice Matulic, W. Buxton, Andrew D. Wilson
We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.
{"title":"Sensing techniques for tablet+stylus interaction","authors":"K. Hinckley, M. Pahud, Hrvoje Benko, Pourang Irani, François Guimbretière, M. Gavriliu, Xiang 'Anthony' Chen, Fabrice Matulic, W. Buxton, Andrew D. Wilson","doi":"10.1145/2642918.2647379","DOIUrl":"https://doi.org/10.1145/2642918.2647379","url":null,"abstract":"We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87919239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 72
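One signal the abstract highlights is that touches made by the hand holding the pen "necessarily impart a detectable motion signal to the stylus". A minimal illustration of that cue as a threshold test (a heuristic of ours, not the paper's classifier; the window length and threshold are made up):

```python
import numpy as np

def touch_from_pen_hand(stylus_accel_window, threshold=0.15):
    """stylus_accel_window: (N, 3) stylus accelerometer samples spanning
    roughly the 100 ms around touch-down. Returns True when a burst of
    stylus motion energy suggests the touch came from the pen-holding hand."""
    mags = np.linalg.norm(stylus_accel_window, axis=1)
    return float(np.std(mags)) > threshold        # energy burst => pen-hand touch

rng = np.random.default_rng(0)
quiet = rng.normal(0.0, 0.02, (50, 3))            # stylus at rest: bare-hand touch
bump = quiet + rng.normal(0.0, 0.3, (50, 3))      # stylus jostled: pen-hand touch
print(touch_from_pen_hand(quiet), touch_from_pen_hand(bump))
```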
Content-aware kinetic scrolling for supporting web page navigation
Juho Kim, Amy X. Zhang, Jihee Kim, Rob Miller, Krzysztof Z Gajos
Long documents are abundant on the web today, and are accessed in increasing numbers from touchscreen devices such as mobile phones and tablets. Navigating long documents with small screens can be challenging both physically and cognitively because they compel the user to scroll a great deal and to mentally filter for important content. To support navigation of long documents on touchscreen devices, we introduce content-aware kinetic scrolling, a novel scrolling technique that dynamically applies pseudo-haptic feedback in the form of friction around points of high interest within the page. This allows users to quickly find interesting content while exploring without further cluttering the limited visual space. To model degrees of interest (DOI) for a variety of existing web pages, we introduce social wear, a method for capturing DOI based on social signals that indicate collective user interest. Our preliminary evaluation shows that users pay attention to items with kinetic scrolling feedback during search, recognition, and skimming tasks.
{"title":"Content-aware kinetic scrolling for supporting web page navigation","authors":"Juho Kim, Amy X. Zhang, Jihee Kim, Rob Miller, Krzysztof Z Gajos","doi":"10.1145/2642918.2647401","DOIUrl":"https://doi.org/10.1145/2642918.2647401","url":null,"abstract":"Long documents are abundant on the web today, and are accessed in increasing numbers from touchscreen devices such as mobile phones and tablets. Navigating long documents with small screens can be challenging both physically and cognitively because they compel the user to scroll a great deal and to mentally filter for important content. To support navigation of long documents on touchscreen devices, we introduce content-aware kinetic scrolling, a novel scrolling technique that dynamically applies pseudo-haptic feedback in the form of friction around points of high interest within the page. This allows users to quickly find interesting content while exploring without further cluttering the limited visual space. To model degrees of interest (DOI) for a variety of existing web pages, we introduce social wear, a method for capturing DOI based on social signals that indicate collective user interest. Our preliminary evaluation shows that users pay attention to items with kinetic scrolling feedback during search, recognition, and skimming tasks.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87749808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
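The pseudo-haptic friction idea can be made concrete with a standard kinetic-scrolling loop whose per-frame velocity decay grows near high degree-of-interest offsets. A minimal sketch with invented constants (the paper's actual friction model may differ):

```python
def friction_at(offset, doi_points, base=0.02, boost=0.12, radius=200.0):
    """Per-frame velocity decay factor. DOI points within `radius` pixels of
    the current scroll offset add pseudo-haptic drag, scaled by their score.
    All constants are illustrative."""
    extra = sum(boost * score * max(0.0, 1.0 - abs(offset - pos) / radius)
                for pos, score in doi_points)
    return base + extra

def step(offset, velocity, doi_points, dt=1 / 60):
    """Advance one frame of kinetic scrolling with content-aware friction."""
    velocity *= 1.0 - min(0.5, friction_at(offset, doi_points))
    return offset + velocity * dt, velocity

offset, velocity = 0.0, 3000.0                # px and px/s right after a fling
doi = [(900.0, 0.8), (2400.0, 0.6)]           # (scroll offset, interest score)
for _ in range(240):                          # simulate 4 s at 60 fps
    offset, velocity = step(offset, velocity, doi)
print(round(offset))                          # the fling decelerates extra near DOI points
```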
SideSwipe: detecting in-air gestures around mobile devices using actual GSM signal
Chen Zhao, Ke-Yu Chen, Md Tanvir Islam Aumi, Shwetak N. Patel, M. Reynolds
Current smartphone inputs are limited to physical buttons, touchscreens, cameras or built-in sensors. These approaches require either a dedicated surface or a line of sight for interaction. We introduce SideSwipe, a novel system that enables in-air gestures both above and around a mobile device. Our system leverages the actual GSM signal to detect hand gestures around the device. We developed an algorithm to convert the discrete and bursty GSM pulses to a continuous wave that can be used for gesture recognition. Specifically, when a user waves their hand near the phone, the hand movement disturbs the signal propagation between the phone's transmitter and added receiving antennas. Our system captures this variation and uses it for gesture recognition. To evaluate our system, we conduct a study with 10 participants and present robust gesture recognition with an average accuracy of 87.2% across 14 hand gestures.
{"title":"SideSwipe: detecting in-air gestures around mobile devices using actual GSM signal","authors":"Chen Zhao, Ke-Yu Chen, Md Tanvir Islam Aumi, Shwetak N. Patel, M. Reynolds","doi":"10.1145/2642918.2647380","DOIUrl":"https://doi.org/10.1145/2642918.2647380","url":null,"abstract":"Current smartphone inputs are limited to physical buttons, touchscreens, cameras or built-in sensors. These approaches either require a dedicated surface or line-of-sight for interaction. We introduce SideSwipe, a novel system that enables in-air gestures both above and around a mobile device. Our system leverages the actual GSM signal to detect hand gestures around the device. We developed an algorithm to convert the discrete and bursty GSM pulses to a continuous wave that can be used for gesture recognition. Specifically, when a user waves their hand near the phone, the hand movement disturbs the signal propagation between the phone's transmitter and added receiving antennas. Our system captures this variation and uses it for gesture recognition. To evaluate our system, we conduct a study with 10 participants and present robust gesture recognition with an average accuracy of 87.2% across 14 hand gestures.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83432749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 69
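The abstract's key signal-processing step converts discrete, bursty GSM pulses into a continuous wave. GSM's TDMA framing produces transmit bursts at roughly 217 Hz, while a waving hand amplitude-modulates the received signal at only a few hertz, so a rectify-and-low-pass envelope recovers the gesture. A toy version of that step (the filter choice is ours, not necessarily the paper's):

```python
import numpy as np

def envelope(samples, win=64):
    """Rectify an RF amplitude stream and low-pass it with a moving average,
    yielding a smooth envelope that slow hand motion modulates."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(samples), kernel, mode="same")

fs = 4000                                                  # Hz, toy sample rate
t = np.arange(fs) / fs                                     # one second of signal
bursts = (np.sin(2 * np.pi * 217 * t) > 0.95).astype(float)  # ~217 Hz GSM-like bursts
motion = 1.0 + 0.4 * np.sin(2 * np.pi * 2.0 * t)           # 2 Hz hand wave
env = envelope(bursts * motion)
print(round(float(env.min()), 3), round(float(env.max()), 3))  # slow modulation survives
```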
HaptoMime: mid-air haptic interaction with a floating virtual screen
Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, Seki Inoue, H. Shinoda
We present HaptoMime, a mid-air interaction system that allows users to touch a floating virtual screen with hands-free tactile feedback. Floating images formed by tailored light beams are inherently lacking in tactile feedback. Here we propose a method to superpose hands-free tactile feedback on such a floating image using ultrasound. By tracking a fingertip with an electronically steerable ultrasonic beam, the fingertip encounters a mechanical force consistent with the floating image. We demonstrate and characterize the proposed transmission scheme and discuss promising applications with an emphasis that it helps us 'pantomime' in mid-air.
{"title":"HaptoMime: mid-air haptic interaction with a floating virtual screen","authors":"Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, Seki Inoue, H. Shinoda","doi":"10.1145/2642918.2647407","DOIUrl":"https://doi.org/10.1145/2642918.2647407","url":null,"abstract":"We present HaptoMime, a mid-air interaction system that allows users to touch a floating virtual screen with hands-free tactile feedback. Floating images formed by tailored light beams are inherently lacking in tactile feedback. Here we propose a method to superpose hands-free tactile feedback on such a floating image using ultrasound. By tracking a fingertip with an electronically steerable ultrasonic beam, the fingertip encounters a mechanical force consistent with the floating image. We demonstrate and characterize the proposed transmission scheme and discuss promising applications with an emphasis that it helps us 'pantomime' in mid-air.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87285018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 124
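Steering an ultrasonic focal point onto a tracked fingertip is, at its core, a phased-array delay calculation: each transducer is driven with a phase that compensates its path length to the focus so the wavefronts add constructively there. A textbook sketch of that calculation (our illustration; the array geometry and drive electronics are assumptions, though 40 kHz transducers are typical for airborne ultrasound):

```python
import numpy as np

SPEED_OF_SOUND = 346.0   # m/s in air at roughly room temperature
FREQ = 40_000.0          # Hz; a common airborne-ultrasound transducer frequency

def focus_phases(transducer_xy, focal_point):
    """Per-transducer phase offsets (radians) that focus the array at a 3D
    point. transducer_xy: (N, 2) positions on the z=0 array plane."""
    xyz = np.column_stack([transducer_xy, np.zeros(len(transducer_xy))])
    dists = np.linalg.norm(xyz - focal_point, axis=1)
    wavelength = SPEED_OF_SOUND / FREQ              # ~8.7 mm at 40 kHz
    # Delay shorter paths so every wavefront arrives at the focus together.
    return (2 * np.pi * (dists.max() - dists) / wavelength) % (2 * np.pi)

# Illustrative 10 x 10 array, 10 cm square, focusing 20 cm above its center.
grid = np.stack(np.meshgrid(np.linspace(-0.05, 0.05, 10),
                            np.linspace(-0.05, 0.05, 10)), -1).reshape(-1, 2)
print(focus_phases(grid, np.array([0.0, 0.0, 0.20]))[:5])
```

Re-solving these phases each frame as the fingertip tracker updates the focal point is what keeps the tactile feedback aligned with the floating image.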
Pinch-to-zoom-plus: an enhanced pinch-to-zoom that reduces clutching and panning
J. Avery, Mark Choi, Daniel Vogel, E. Lank
Despite its popularity, the classic pinch-to-zoom gesture used in modern multi-touch interfaces has drawbacks: specifically, the need to support an extended range of scales and to keep content within the view window on the display can result in the need to clutch and pan. In two formative studies of unimanual and bimanual pinch-to-zoom, we found patterns: zooming actions follow a predictable ballistic velocity curve, and users tend to pan the point-of-interest towards the center of the screen. We apply these results to design an enhanced zooming technique called Pinch-to-Zoom-Plus (PZP) that reduces clutching and panning operations compared to standard pinch-to-zoom behaviour.
{"title":"Pinch-to-zoom-plus: an enhanced pinch-to-zoom that reduces clutching and panning","authors":"J. Avery, Mark Choi, Daniel Vogel, E. Lank","doi":"10.1145/2642918.2647352","DOIUrl":"https://doi.org/10.1145/2642918.2647352","url":null,"abstract":"Despite its popularity, the classic pinch-to-zoom gesture used in modern multi-touch interfaces has drawbacks: specifically, the need to support an extended range of scales and the need to keep content within the view window on the display can result in the need to clutch and pan. In two formative studies of unimanual and bimanual pinch-to-zoom, we found patterns: zooming actions follows a predictable ballistic velocity curve, and users tend to pan the point-of-interest towards the center of the screen. We apply these results to design an enhanced zooming technique called Pinch-to-Zoom-Plus (PZP) that reduces clutching and panning operations compared to standard pinch-to-zoom behaviour.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85261503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
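The observation that zoom gestures follow a ballistic (bell-shaped) velocity profile suggests a predictor: once velocity has peaked, the remaining zoom roughly mirrors the zoom accrued so far. A simplified sketch of that idea (our reading of the abstract, not PZP's actual model):

```python
import math

def predict_final_log_scale(log_scales):
    """Given per-frame log(scale) samples, assume a roughly symmetric
    velocity profile: the total change is about twice the change accrued
    by the time velocity peaks. A deliberate simplification."""
    v = [b - a for a, b in zip(log_scales, log_scales[1:])]
    if not v:
        return log_scales[-1]
    peak = max(range(len(v)), key=lambda i: abs(v[i]))
    accrued = log_scales[peak + 1] - log_scales[0]
    return log_scales[0] + 2.0 * accrued

def smoothstep(t):
    """Bell-shaped velocity: smoothstep position profile."""
    return 3 * t**2 - 2 * t**3

# Simulated pinch toward 4x zoom, sampled up to its mid-gesture velocity peak.
target = math.log(4.0)
samples = [target * smoothstep(i / 20) for i in range(11)]
print(round(math.exp(predict_final_log_scale(samples)), 2))  # ~4.0
```

Predicting in log-scale space keeps the extrapolation symmetric for zoom-in and zoom-out gestures.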