PortraitSketch: face sketching assistance for novices
Jun Xie, Aaron Hertzmann, Wilmot Li, H. Winnemöller
We present PortraitSketch, an interactive drawing system that helps novices create pleasing, recognizable face sketches without requiring prior artistic training. As the user traces over a source portrait photograph, PortraitSketch automatically adjusts the geometry and stroke parameters (thickness, opacity, etc.) to improve the aesthetic quality of the sketch. We present algorithms for adjusting both outlines and shading strokes based on important features of the underlying source image. In contrast to automatic stylization systems, PortraitSketch is designed to encourage a sense of ownership and accomplishment in the user. To this end, all adjustments are performed in real-time, and the user ends up directly drawing all strokes on the canvas. The findings from our user study suggest that users prefer drawing with some automatic assistance, thereby producing better drawings, and that assistance does not decrease the perceived level of involvement in the creative process.
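The abstract describes adjusting the geometry of traced strokes toward important features of the source photograph. The authors' actual algorithm is not reproduced here; the following is only a minimal illustrative sketch of feature-guided geometric correction, where the `feature_pts` input, the snap distance, and the blending weight are all assumptions.

```python
import numpy as np

def adjust_stroke(stroke_pts, feature_pts, max_snap_dist=12.0, strength=0.6):
    """Pull each traced stroke point toward the nearest detected feature point.

    stroke_pts  : (N, 2) array of user-drawn points (canvas pixels)
    feature_pts : (M, 2) array of feature locations from the source photo
                  (e.g. edge samples around eyes, nose, mouth)
    Returns an (N, 2) array of geometry-corrected points.
    """
    adjusted = stroke_pts.astype(float).copy()
    for i, p in enumerate(stroke_pts):
        d = np.linalg.norm(feature_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_snap_dist:
            # Blend toward the feature; closer points are corrected more strongly.
            w = strength * (1.0 - d[j] / max_snap_dist)
            adjusted[i] = (1.0 - w) * p + w * feature_pts[j]
    return adjusted
```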
{"title":"PortraitSketch: face sketching assistance for novices","authors":"Jun Xie, Aaron Hertzmann, Wilmot Li, H. Winnemöller","doi":"10.1145/2642918.2647399","DOIUrl":"https://doi.org/10.1145/2642918.2647399","url":null,"abstract":"We present PortraitSketch, an interactive drawing system that helps novices create pleasing, recognizable face sketches without requiring prior artistic training. As the user traces over a source portrait photograph, PortraitSketch automatically adjusts the geometry and stroke parameters (thickness, opacity, etc.) to improve the aesthetic quality of the sketch. We present algorithms for adjusting both outlines and shading strokes based on important features of the underlying source image. In contrast to automatic stylization systems, PortraitSketch is designed to encourage a sense of ownership and accomplishment in the user. To this end, all adjustments are performed in real-time, and the user ends up directly drawing all strokes on the canvas. The findings from our user study suggest that users prefer drawing with some automatic assistance, thereby producing better drawings, and that assistance does not decrease the perceived level of involvement in the creative process.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78603547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
World-stabilized annotations and virtual scene navigation for remote collaboration
Steffen Gauglitz, B. Nuernberger, M. Turk, Tobias Höllerer
We present a system that supports an augmented shared visual space for live mobile remote collaboration on physical tasks. The remote user can explore the scene independently of the local user's current camera position and can communicate via spatial annotations that are immediately visible to the local user in augmented reality. Our system operates on off-the-shelf hardware and uses real-time visual tracking and modeling, thus not requiring any preparation or instrumentation of the environment. It creates a synergy between video conferencing and remote scene exploration under a unique coherent interface. To evaluate the collaboration with our system, we conducted an extensive outdoor user study with 60 participants comparing our system with two baseline interfaces. Our results indicate an overwhelming user preference (80%) for our system, a high level of usability, as well as performance benefits compared with one of the two baselines.
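The system keeps the remote user's annotations world-stabilized in the local user's AR view by anchoring them in the tracked 3D scene. The paper's tracking and modeling pipeline is not shown; the sketch below only illustrates the general step of reprojecting a world-anchored point into the current camera frame under a pinhole model, with all names and the pose convention being assumptions.

```python
import numpy as np

def project_annotation(anchor_world, R_cam, t_cam, fx, fy, cx, cy):
    """Reproject a world-anchored annotation point into the current camera image.

    anchor_world   : (3,) point fixed in world coordinates (set when the remote
                     user placed the annotation, e.g. by raycasting into the model)
    R_cam, t_cam   : current camera pose (world -> camera), from visual tracking
    fx, fy, cx, cy : pinhole intrinsics of the local device's camera
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    p_cam = R_cam @ anchor_world + t_cam
    if p_cam[2] <= 0:          # behind the camera: annotation not visible
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v
```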
{"title":"World-stabilized annotations and virtual scene navigation for remote collaboration","authors":"Steffen Gauglitz, B. Nuernberger, M. Turk, Tobias Höllerer","doi":"10.1145/2642918.2647372","DOIUrl":"https://doi.org/10.1145/2642918.2647372","url":null,"abstract":"We present a system that supports an augmented shared visual space for live mobile remote collaboration on physical tasks. The remote user can explore the scene independently of the local user's current camera position and can communicate via spatial annotations that are immediately visible to the local user in augmented reality. Our system operates on off-the-shelf hardware and uses real-time visual tracking and modeling, thus not requiring any preparation or instrumentation of the environment. It creates a synergy between video conferencing and remote scene exploration under a unique coherent interface. To evaluate the collaboration with our system, we conducted an extensive outdoor user study with 60 participants comparing our system with two baseline interfaces. Our results indicate an overwhelming user preference (80%) for our system, a high level of usability, as well as performance benefits compared with one of the two baselines.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81213715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Going to the dogs: towards an interactive touchscreen interface for working dogs
C. Zeagler, Scott M. Gilliland, Larry Freil, Thad Starner, M. Jackson
Computer-mediated interaction for working dogs is an important new domain for interaction research. In domestic settings, touchscreens could provide a way for dogs to communicate critical information to humans. In this paper we explore how a dog might interact with a touchscreen interface. We observe dogs' touchscreen interactions and record difficulties against what is expected of humans' touchscreen interactions. We also solve hardware issues through screen adaptations and projection styles to make a touchscreen usable for a canine's nose touch interactions. We also compare our canine touch data to humans' touch data on the same system. Our goal is to understand the affordances needed to make touchscreen interfaces usable for canines and help the future design of touchscreen interfaces for assistive dogs in the home.
{"title":"Going to the dogs: towards an interactive touchscreen interface for working dogs","authors":"C. Zeagler, Scott M. Gilliland, Larry Freil, Thad Starner, M. Jackson","doi":"10.1145/2642918.2647364","DOIUrl":"https://doi.org/10.1145/2642918.2647364","url":null,"abstract":"Computer-mediated interaction for working dogs is an important new domain for interaction research. In domestic settings, touchscreens could provide a way for dogs to communicate critical information to humans. In this paper we explore how a dog might interact with a touchscreen interface. We observe dogs' touchscreen interactions and record difficulties against what is expected of humans' touchscreen interactions. We also solve hardware issues through screen adaptations and projection styles to make a touchscreen usable for a canine's nose touch interactions. We also compare our canine touch data to humans' touch data on the same system. Our goal is to understand the affordances needed to make touchscreen interfaces usable for canines and help the future design of touchscreen interfaces for assistive dogs in the home.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79506690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reflection
Yang Li
By knowing which upcoming action a user might perform, a mobile application can optimize its user interface for accomplishing the task. However, it is technically challenging for developers to implement event prediction in their own application. We created Reflection, an on-device service that answers queries from a mobile application regarding which actions the user is likely to perform at a given time. Any application can register itself and communicate with Reflection via a simple API. Reflection continuously learns a prediction model for each application based on its evolving event history. It employs a novel method for prediction by 1) combining multiple well-designed predictors with an online learning method, and 2) capturing event patterns not only within but also across registered applications--only possible as an infrastructure solution. We evaluated Reflection with two sets of large-scale, in situ mobile event logs, which showed our infrastructure approach is feasible.
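Reflection's core idea is to combine several predictors with an online learning method. The paper's actual predictors and API are not reproduced here; the sketch below shows one standard way to weight a set of base predictors with multiplicative-weights-style online updates. All class and method names are invented for illustration.

```python
from collections import defaultdict

class EnsemblePredictor:
    """Toy ensemble: each base predictor scores candidate actions; weights are
    updated online depending on whether a predictor ranked the observed action first."""

    def __init__(self, predictors, eta=0.1):
        self.predictors = predictors          # objects with .scores(history) -> {action: score}
        self.weights = [1.0] * len(predictors)
        self.eta = eta

    def predict(self, history, top_k=3):
        combined = defaultdict(float)
        for w, p in zip(self.weights, self.predictors):
            for action, s in p.scores(history).items():
                combined[action] += w * s
        return sorted(combined, key=combined.get, reverse=True)[:top_k]

    def observe(self, history, actual_action):
        # Multiplicative-weights update: penalize predictors whose top guess was wrong.
        for i, p in enumerate(self.predictors):
            scores = p.scores(history)
            best = max(scores, key=scores.get) if scores else None
            if best != actual_action:
                self.weights[i] *= (1.0 - self.eta)
        total = sum(self.weights) or 1.0
        self.weights = [w / total for w in self.weights]


class MostRecentPredictor:
    """Base predictor: scores actions by how recently they were seen."""
    def scores(self, history):
        s = {}
        for i, a in enumerate(reversed(history)):
            s.setdefault(a, 1.0 / (i + 1))   # keep the most recent occurrence's score
        return s
```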
{"title":"Reflection","authors":"Yang Li","doi":"10.1145/2642918.2647355","DOIUrl":"https://doi.org/10.1145/2642918.2647355","url":null,"abstract":"By knowing which upcoming action a user might perform, a mobile application can optimize its user interface for accomplishing the task. However, it is technically challenging for developers to implement event prediction in their own application. We created Reflection, an on-device service that answers queries from a mobile application regarding which actions the user is likely to perform at a given time. Any application can register itself and communicate with Reflection via a simple API. Reflection continuously learns a prediction model for each application based on its evolving event history. It employs a novel method for prediction by 1) combining multiple well-designed predictors with an online learning method, and 2) capturing event patterns not only within but also across registered applications--only possible as an infrastructure solution. We evaluated Reflection with two sets of large-scale, in situ mobile event logs, which showed our infrastructure approach is feasible.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"106 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84197609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ParaFrustum: visualization techniques for guiding a user to a constrained set of viewing positions and orientations
Mengu Sukan, Carmine Elvezio, Ohan Oda, Steven K. Feiner, B. Tversky
Many tasks in real or virtual environments require users to view a target object or location from one of a set of strategic viewpoints to see it in context, avoid occlusions, or view it at an appropriate angle or distance. We introduce ParaFrustum, a geometric construct that represents this set of strategic viewpoints and viewing directions. ParaFrustum is inspired by the look-from and look-at points of a computer graphics camera specification, which precisely delineate a location for the camera and a direction in which it looks. We generalize this approach by defining a ParaFrustum in terms of a look-from volume and a look-at volume, which establish constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DoF pose when it is not required by the task. We describe two visualization techniques for virtual or augmented reality that guide a user to assume one of the poses defined by a ParaFrustum, and present the results of a user study measuring the performance of these techniques. The study shows that the constraints of a tightly constrained ParaFrustum (e.g., approximating a conventional camera frustum) require significantly more time to satisfy than those of a loosely constrained one. The study also reveals interesting differences in participant trajectories in response to the two techniques.
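A ParaFrustum is defined by a look-from volume and a look-at volume. The paper's geometric representation is not reproduced here; a minimal sketch of the pose-satisfaction test, simplifying both volumes to spheres and using an angular tolerance (all parameters are assumptions), could look like this:

```python
import numpy as np

def satisfies_parafrustum(eye, view_dir,
                          look_from_center, look_from_radius,
                          look_at_center, look_at_radius,
                          angle_tolerance_deg=10.0):
    """Check whether a head pose satisfies a (simplified) ParaFrustum.

    eye      : (3,) current eye position
    view_dir : (3,) current viewing direction (need not be normalized)
    The look-from and look-at volumes are approximated here as spheres.
    """
    # 1. The eye must lie inside the look-from volume.
    if np.linalg.norm(eye - look_from_center) > look_from_radius:
        return False

    # 2. The viewing direction must point at the look-at volume within tolerance.
    to_target = look_at_center - eye
    dist = np.linalg.norm(to_target)
    if dist <= look_at_radius:
        return True                      # already inside the look-at volume
    v = view_dir / np.linalg.norm(view_dir)
    cos_angle = np.clip(np.dot(v, to_target / dist), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    angular_extent = np.degrees(np.arcsin(look_at_radius / dist))
    return angle <= angular_extent + angle_tolerance_deg
```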
{"title":"ParaFrustum: visualization techniques for guiding a user to a constrained set of viewing positions and orientations","authors":"Mengu Sukan, Carmine Elvezio, Ohan Oda, Steven K. Feiner, B. Tversky","doi":"10.1145/2642918.2647417","DOIUrl":"https://doi.org/10.1145/2642918.2647417","url":null,"abstract":"Many tasks in real or virtual environments require users to view a target object or location from one of a set of strategic viewpoints to see it in context, avoid occlusions, or view it at an appropriate angle or distance. We introduce ParaFrustum, a geometric construct that represents this set of strategic viewpoints and viewing directions. ParaFrustum is inspired by the look-from and look-at points of a computer graphics camera specification, which precisely delineate a location for the camera and a direction in which it looks. We generalize this approach by defining a ParaFrustum in terms of a look-from volume and a look-at volume, which establish constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DoF pose when it is not required by the task. We describe two visualization techniques for virtual or augmented reality that guide a user to assume one of the poses defined by a ParaFrustum, and present the results of a user study measuring the performance of these techniques. The study shows that the constraints of a tightly constrained ParaFrustum (e.g., approximating a conventional camera frustum) require significantly more time to satisfy than those of a loosely constrained one. The study also reveals interesting differences in participant trajectories in response to the two techniques.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87647493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensing techniques for tablet+stylus interaction
K. Hinckley, M. Pahud, Hrvoje Benko, Pourang Irani, François Guimbretière, M. Gavriliu, Xiang 'Anthony' Chen, Fabrice Matulic, W. Buxton, Andrew D. Wilson
We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.
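The techniques combine stylus grip state, tablet grip, and correlated motion signals to decide how each touch should be treated. The paper's classifiers are not shown; the sketch below only illustrates the kind of rule-level fusion the abstract describes, and the signal names, thresholds, and labels are all assumptions.

```python
def classify_touch(touch_event, pen_grip, pen_motion_peak,
                   touch_time, pen_motion_time,
                   motion_threshold=1.5, sync_window=0.08):
    """Rule-level fusion of grip and motion signals for a new touch event.

    pen_grip        : 'writing', 'tucked', or 'none' (from grip sensing)
    pen_motion_peak : peak stylus acceleration magnitude near the touch (m/s^2)
    touch_time, pen_motion_time : timestamps used to correlate touch and stylus motion
    Returns a label describing how the touch should be handled.
    """
    # A touch that coincides with a jolt on the stylus was likely made by the
    # hand holding the pen, since that contact imparts motion to the stylus.
    pen_hand_touch = (pen_motion_peak > motion_threshold and
                      abs(touch_time - pen_motion_time) < sync_window)

    if pen_grip == 'writing':
        return 'reject_unintentional'        # palm/finger contact while writing
    if pen_grip == 'tucked' and pen_hand_touch and touch_event == 'pinch':
        return 'magnifier_tool'              # detail magnifier for stroke work
    if not pen_hand_touch:
        return 'nonpreferred_hand_gesture'   # e.g. pan/zoom with the bare hand
    return 'default_touch'
```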
{"title":"Sensing techniques for tablet+stylus interaction","authors":"K. Hinckley, M. Pahud, Hrvoje Benko, Pourang Irani, François Guimbretière, M. Gavriliu, Xiang 'Anthony' Chen, Fabrice Matulic, W. Buxton, Andrew D. Wilson","doi":"10.1145/2642918.2647379","DOIUrl":"https://doi.org/10.1145/2642918.2647379","url":null,"abstract":"We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87919239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Content-aware kinetic scrolling for supporting web page navigation
Juho Kim, Amy X. Zhang, Jihee Kim, Rob Miller, Krzysztof Z Gajos
Long documents are abundant on the web today, and are accessed in increasing numbers from touchscreen devices such as mobile phones and tablets. Navigating long documents with small screens can be challenging both physically and cognitively because they compel the user to scroll a great deal and to mentally filter for important content. To support navigation of long documents on touchscreen devices, we introduce content-aware kinetic scrolling, a novel scrolling technique that dynamically applies pseudo-haptic feedback in the form of friction around points of high interest within the page. This allows users to quickly find interesting content while exploring without further cluttering the limited visual space. To model degrees of interest (DOI) for a variety of existing web pages, we introduce social wear, a method for capturing DOI based on social signals that indicate collective user interest. Our preliminary evaluation shows that users pay attention to items with kinetic scrolling feedback during search, recognition, and skimming tasks.
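Content-aware kinetic scrolling applies pseudo-haptic friction around points of high interest. The paper's exact friction model is not given here; the following sketch shows one plausible way to damp scroll velocity near high-DOI positions, with the Gaussian falloff and all constants being assumptions.

```python
import math

def apply_content_friction(velocity, scroll_pos, doi_points,
                           base_friction=0.02, max_extra_friction=0.25,
                           influence=300.0):
    """One step of kinetic scrolling with extra friction near interesting content.

    velocity   : current scroll velocity (pixels/frame)
    scroll_pos : current scroll offset (pixels)
    doi_points : list of (position, interest) pairs, interest in [0, 1]
                 (e.g. derived from social signals -- 'social wear')
    Returns the velocity after applying friction for this frame.
    """
    friction = base_friction
    for pos, interest in doi_points:
        # Gaussian falloff: friction grows as the view approaches a high-DOI position.
        falloff = math.exp(-((scroll_pos - pos) ** 2) / (2 * influence ** 2))
        friction += max_extra_friction * interest * falloff
    friction = min(friction, 0.9)
    return velocity * (1.0 - friction)
```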
{"title":"Content-aware kinetic scrolling for supporting web page navigation","authors":"Juho Kim, Amy X. Zhang, Jihee Kim, Rob Miller, Krzysztof Z Gajos","doi":"10.1145/2642918.2647401","DOIUrl":"https://doi.org/10.1145/2642918.2647401","url":null,"abstract":"Long documents are abundant on the web today, and are accessed in increasing numbers from touchscreen devices such as mobile phones and tablets. Navigating long documents with small screens can be challenging both physically and cognitively because they compel the user to scroll a great deal and to mentally filter for important content. To support navigation of long documents on touchscreen devices, we introduce content-aware kinetic scrolling, a novel scrolling technique that dynamically applies pseudo-haptic feedback in the form of friction around points of high interest within the page. This allows users to quickly find interesting content while exploring without further cluttering the limited visual space. To model degrees of interest (DOI) for a variety of existing web pages, we introduce social wear, a method for capturing DOI based on social signals that indicate collective user interest. Our preliminary evaluation shows that users pay attention to items with kinetic scrolling feedback during search, recognition, and skimming tasks.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87749808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SideSwipe: detecting in-air gestures around mobile devices using actual GSM signal
Chen Zhao, Ke-Yu Chen, Md Tanvir Islam Aumi, Shwetak N. Patel, M. Reynolds
Current smartphone inputs are limited to physical buttons, touchscreens, cameras or built-in sensors. These approaches either require a dedicated surface or line-of-sight for interaction. We introduce SideSwipe, a novel system that enables in-air gestures both above and around a mobile device. Our system leverages the actual GSM signal to detect hand gestures around the device. We developed an algorithm to convert the discrete and bursty GSM pulses to a continuous wave that can be used for gesture recognition. Specifically, when a user waves their hand near the phone, the hand movement disturbs the signal propagation between the phone's transmitter and added receiving antennas. Our system captures this variation and uses it for gesture recognition. To evaluate our system, we conduct a study with 10 participants and present robust gesture recognition with an average accuracy of 87.2% across 14 hand gestures.
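SideSwipe converts the discrete, bursty GSM pulses into a continuous waveform before gesture recognition. The authors' conversion algorithm is not reproduced; the sketch below is only one plausible reading of that step: keep the burst peaks, interpolate between them, and low-pass filter the result. All parameters are assumptions.

```python
import numpy as np

def bursty_to_continuous(samples, timestamps, burst_threshold, out_rate=100.0):
    """Turn sparse GSM burst amplitudes into a smooth, uniformly sampled signal.

    samples         : received signal amplitudes (bursty, mostly near zero)
    timestamps      : sample times in seconds (monotonically increasing)
    burst_threshold : amplitude above which a sample is treated as a burst peak
    out_rate        : output sampling rate in Hz
    """
    samples = np.abs(np.asarray(samples, dtype=float))
    timestamps = np.asarray(timestamps, dtype=float)

    # Keep only the burst peaks, then interpolate between them.
    mask = samples > burst_threshold
    if mask.sum() < 2:
        return np.array([]), np.array([])
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / out_rate)
    continuous = np.interp(t_uniform, timestamps[mask], samples[mask])

    # Simple moving-average low-pass to smooth residual burst structure.
    kernel = np.ones(5) / 5.0
    smoothed = np.convolve(continuous, kernel, mode='same')
    return t_uniform, smoothed
```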
{"title":"SideSwipe: detecting in-air gestures around mobile devices using actual GSM signal","authors":"Chen Zhao, Ke-Yu Chen, Md Tanvir Islam Aumi, Shwetak N. Patel, M. Reynolds","doi":"10.1145/2642918.2647380","DOIUrl":"https://doi.org/10.1145/2642918.2647380","url":null,"abstract":"Current smartphone inputs are limited to physical buttons, touchscreens, cameras or built-in sensors. These approaches either require a dedicated surface or line-of-sight for interaction. We introduce SideSwipe, a novel system that enables in-air gestures both above and around a mobile device. Our system leverages the actual GSM signal to detect hand gestures around the device. We developed an algorithm to convert the discrete and bursty GSM pulses to a continuous wave that can be used for gesture recognition. Specifically, when a user waves their hand near the phone, the hand movement disturbs the signal propagation between the phone's transmitter and added receiving antennas. Our system captures this variation and uses it for gesture recognition. To evaluate our system, we conduct a study with 10 participants and present robust gesture recognition with an average accuracy of 87.2% across 14 hand gestures.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83432749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HaptoMime: mid-air haptic interaction with a floating virtual screen
Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, Seki Inoue, H. Shinoda
We present HaptoMime, a mid-air interaction system that allows users to touch a floating virtual screen with hands-free tactile feedback. Floating images formed by tailored light beams are inherently lacking in tactile feedback. Here we propose a method to superpose hands-free tactile feedback on such a floating image using ultrasound. By tracking a fingertip with an electronically steerable ultrasonic beam, the fingertip encounters a mechanical force consistent with the floating image. We demonstrate and characterize the proposed transmission scheme and discuss promising applications with an emphasis that it helps us 'pantomime' in mid-air.
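HaptoMime steers an ultrasonic focal point to the tracked fingertip. The paper's hardware and control loop are not reproduced; the sketch below only shows the standard phased-array calculation of per-transducer delays for focusing at a 3D point, with the array layout and speed of sound taken as assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def focus_delays(transducer_positions, focal_point):
    """Per-transducer emission delays that focus an ultrasound beam at a point.

    transducer_positions : (N, 3) array of transducer centers (meters)
    focal_point          : (3,) target point, e.g. the tracked fingertip
    Returns delays in seconds; the farthest transducer fires first (delay 0)
    so that all wavefronts arrive at the focal point simultaneously.
    """
    dists = np.linalg.norm(transducer_positions - focal_point, axis=1)
    return (dists.max() - dists) / SPEED_OF_SOUND
```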
{"title":"HaptoMime: mid-air haptic interaction with a floating virtual screen","authors":"Y. Monnai, K. Hasegawa, M. Fujiwara, K. Yoshino, Seki Inoue, H. Shinoda","doi":"10.1145/2642918.2647407","DOIUrl":"https://doi.org/10.1145/2642918.2647407","url":null,"abstract":"We present HaptoMime, a mid-air interaction system that allows users to touch a floating virtual screen with hands-free tactile feedback. Floating images formed by tailored light beams are inherently lacking in tactile feedback. Here we propose a method to superpose hands-free tactile feedback on such a floating image using ultrasound. By tracking a fingertip with an electronically steerable ultrasonic beam, the fingertip encounters a mechanical force consistent with the floating image. We demonstrate and characterize the proposed transmission scheme and discuss promising applications with an emphasis that it helps us 'pantomime' in mid-air.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87285018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pinch-to-zoom-plus: an enhanced pinch-to-zoom that reduces clutching and panning
J. Avery, Mark Choi, Daniel Vogel, E. Lank
Despite its popularity, the classic pinch-to-zoom gesture used in modern multi-touch interfaces has drawbacks: specifically, the need to support an extended range of scales and the need to keep content within the view window on the display can result in the need to clutch and pan. In two formative studies of unimanual and bimanual pinch-to-zoom, we found patterns: zooming actions follow a predictable ballistic velocity curve, and users tend to pan the point-of-interest towards the center of the screen. We apply these results to design an enhanced zooming technique called Pinch-to-Zoom-Plus (PZP) that reduces clutching and panning operations compared to standard pinch-to-zoom behaviour.
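PZP builds on two observed patterns: zoom gestures follow a ballistic velocity curve, and users pan the point of interest toward the screen center. The paper's actual technique is not reproduced; the sketch below only illustrates the second idea, drifting the pinch focus toward the center during each zoom step so less explicit panning is needed. The drift factor and the offset-plus-scale content mapping are assumptions.

```python
def zoom_step(viewport_center, pinch_focus, scale_factor, content_offset,
              center_drift=0.15):
    """Apply one pinch-zoom step that also nudges the point of interest
    toward the viewport center, reducing the need for follow-up panning.

    viewport_center : (x, y) center of the view window, in screen pixels
    pinch_focus     : (x, y) midpoint of the two touch points, in screen pixels
    scale_factor    : relative scale change for this step (e.g. 1.03)
    content_offset  : (x, y) current translation of the content, in screen pixels
    Returns the new content offset.
    """
    # Blend the zoom anchor toward the screen center.
    ax = pinch_focus[0] + center_drift * (viewport_center[0] - pinch_focus[0])
    ay = pinch_focus[1] + center_drift * (viewport_center[1] - pinch_focus[1])

    # Standard zoom-about-a-point update of the content translation:
    # the anchor point stays (approximately) fixed on screen as scale changes.
    ox, oy = content_offset
    new_ox = ax - scale_factor * (ax - ox)
    new_oy = ay - scale_factor * (ay - oy)
    return new_ox, new_oy
```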
{"title":"Pinch-to-zoom-plus: an enhanced pinch-to-zoom that reduces clutching and panning","authors":"J. Avery, Mark Choi, Daniel Vogel, E. Lank","doi":"10.1145/2642918.2647352","DOIUrl":"https://doi.org/10.1145/2642918.2647352","url":null,"abstract":"Despite its popularity, the classic pinch-to-zoom gesture used in modern multi-touch interfaces has drawbacks: specifically, the need to support an extended range of scales and the need to keep content within the view window on the display can result in the need to clutch and pan. In two formative studies of unimanual and bimanual pinch-to-zoom, we found patterns: zooming actions follows a predictable ballistic velocity curve, and users tend to pan the point-of-interest towards the center of the screen. We apply these results to design an enhanced zooming technique called Pinch-to-Zoom-Plus (PZP) that reduces clutching and panning operations compared to standard pinch-to-zoom behaviour.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85261503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}