Feedback tools help people monitor information about themselves to improve their health, sustainability practices, or personal well-being. Yet reasoning about personal data (e.g., pedometer counts, blood pressure readings, or home electricity consumption) to gain a deep understanding of one's current practices, and of how to change them, can be challenging with the data alone. We integrate quantitative feedback data within a personal digital calendar; this approach aims to make the feedback data readily accessible and more comprehensible. We report on an eight-week field study of an on-calendar visualization tool. Results showed that a personal calendar can provide rich context for people to reason about their feedback data. The on-calendar visualization enabled people to quickly identify and reason about regular patterns and anomalies. Based on our results, we also derived a model of the behavior feedback process that extends existing technology adoption models, and we reflect on potential barriers to the ongoing use of feedback tools.
"A Field Study of On-Calendar Visualizations." D. Huang, Melanie Tory, L. Bartram. Proceedings of Graphics Interface 2016, pp. 13–20. DOI: 10.20380/GI2016.03
Michelle Wiebe, Denise Y. Geiskkovitch, Andrea Bunt, J. Young, Melanie R. Glenwright
This paper proposes the use of graphical representations - colloquially referred to as "icons" - of app-store program categories and provides evidence via a user study that these icons can be understood by young children (aged 4-8). Given the rapid growth of this user base, providing such graphical representations is important to aid young children in navigating (under usual parental supervision) and understanding the large number of apps available. This work further provides an initial set of candidate graphical representations that have been evaluated with children, which serve as a starting point for future implementations and exploration.
"Icons for Kids: Can Young Children Understand Graphical Representations of App Store Categories?" Proceedings of Graphics Interface 2016, pp. 163–166. DOI: 10.20380/GI2016.20
Jonggi Hong, Lee Stearns, Jon E. Froehlich, David Ross, Leah Findlater
Haptic guidance for the hand can offer an alternative to visual or audio feedback when those information channels are overloaded or inaccessible due to environmental factors, vision impairments, or hearing loss. We report on a controlled lab experiment to evaluate the impact of directional wrist-based vibro-motor feedback on hand movement, comparing lower-fidelity (4-motor) and higher-fidelity (8-motor) wristbands. Twenty blindfolded participants completed a series of trials, which consisted of interpreting a haptic stimulus and executing a 2D directional movement on a touchscreen. We compare the two conditions in terms of movement error and trial speed, but also analyze the impact of specific directions on performance. Our results show that doubling the number of haptic motors reduces directional movement error but not to the extent expected. We also empirically derive an apparent lower bound in accuracy of ~25° in interpreting and executing on the directional haptic signal.
"Evaluating Angular Accuracy of Wrist-based Haptic Directional Guidance for Hand Movement." Proceedings of Graphics Interface 2016, pp. 195–200. DOI: 10.20380/GI2016.25
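The study's error metric compares the cued direction against the direction the participant actually moved. As a minimal illustration (not the authors' analysis code), the smallest angular difference between two 2D directions can be computed as follows; with 8 evenly spaced motors, the nearest cue is at most 22.5° from any target direction, which is consistent with an accuracy floor near 25°:

```python
def angular_error(target_deg, executed_deg):
    """Smallest absolute angular difference (in degrees) between two
    2D directions, accounting for wrap-around at 360."""
    diff = (executed_deg - target_deg) % 360.0
    return min(diff, 360.0 - diff)

# Directions on either side of 0 degrees are only 20 degrees apart
print(angular_error(350, 10))  # 20.0
```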
In this work we present an automatic shape extraction and classification method for face and eyewear shapes. Our novel eyewear shape extraction algorithm can extract the polygonal shape of eyewear accurately and reliably, even for reflective sunglasses and thin metal frames. Additionally, we identify key geometric features that reliably differentiate the shape classes, and we integrate them into a supervised learning technique for face and eyewear shape classification. Finally, we incorporate the shape extraction and classification algorithms into a practical data-driven eyewear recommendation system that we validate empirically with a user study.
"Face and Frame Classification using Geometric Features for a Data-driven Frame Recommendation System." A. Zafar, T. Popa. Proceedings of Graphics Interface 2016, pp. 183–188. DOI: 10.20380/GI2016.23
Matthew Fong, G. Miller, Xueqin Zhang, Ido Roll, C. Hendricks, S. Fels
Video is used extensively as an instructional aid within educational contexts such as blended (flipped) courses, self-learning with MOOCs, and informal learning through online tutorials. One challenge is providing mechanisms for students to manage their video collection and quickly review or search for content. We provided students with a number of video interface features to establish which they would find most useful for video courses. From this, we designed an interface that uses textbook-style highlighting on a video filmstrip and transcript, both presented adjacent to a video player. This interface was qualitatively evaluated to determine whether highlighting works well for saving intervals, and what strategies students use when given both direct video highlighting and the text-based transcript interface. Our participants reported that highlighting is a useful addition to instructional video. The familiar interaction of highlighting text was preferred, with the filmstrip used for intervals with more visual stimuli.
"An Investigation of Textbook-Style Highlighting for Video." Proceedings of Graphics Interface 2016, pp. 201–208. DOI: 10.20380/GI2016.26
The vast majority of video content existing today is in Standard Dynamic Range (SDR) format, and there is strong interest in upscaling this content for upcoming High Dynamic Range (HDR) displays. Tone expansion, or inverse tone mapping, converts SDR content into HDR format using Expansion Operators (EOs). In this paper, we show that current EOs do not perform well on content with varying lighting-style aesthetics. In addition, we present a series of perceptual user studies evaluating user preference for lighting style in HDR content. This study shows that tone expansion of stylized content takes the form of gamma correction, and we propose a method that adapts the gamma value to the style of the video. We validate our method through a subjective evaluation against state-of-the-art methods. Furthermore, our work targets 1000-nit HDR displays, and we present a framework positioning our method in conformance with existing SDR standards and upcoming HDR TV standards.
"Style Aware Tone Expansion for HDR Displays." Cambodge Bist, R. Cozot, G. Madec, X. Ducloux. Proceedings of Graphics Interface 2016, pp. 57–63. DOI: 10.20380/GI2016.08
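As a rough sketch of the gamma-based expansion the abstract describes, normalized SDR luminance can be mapped to an HDR range as below. The specific gamma and peak luminance here are illustrative assumptions, not the paper's fitted values; a style-aware method would adapt the gamma per video:

```python
import numpy as np

def expand_sdr_to_hdr(sdr, gamma=2.0, peak_nits=1000.0):
    """Expand normalized SDR luminance in [0, 1] to HDR luminance in nits
    via gamma correction. `gamma` and `peak_nits` are placeholder values."""
    sdr = np.clip(np.asarray(sdr, dtype=float), 0.0, 1.0)
    return peak_nits * sdr ** gamma

# Mid-gray SDR maps to a quarter of peak luminance when gamma = 2.0
print(float(expand_sdr_to_hdr(0.5)))  # 250.0
```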
Visualizing high-dimensional labeled data on a two-dimensional plane can quickly result in visual clutter and information overload. To address this problem, the data usually needs to be structured so that only parts of it are displayed at a time. We present a hierarchy-based approach that projects labeled data at different levels of detail onto a two-dimensional plane, whilst keeping the user's cognitive load across level changes as low as possible. The approach consists of three steps: first, the data is hierarchically clustered; second, the user can determine levels of detail; third, the levels of detail are visualized one at a time on a two-dimensional plane. Animations make transitions between the levels of detail traceable, while the exploration on each level is supported by several interaction techniques. We demonstrate the applicability and usefulness of the approach with use cases from the patent domain and a question-and-answer website.
"Visual Clutter Reduction through Hierarchy-based Projection of High-dimensional Labeled Data." Dominik Herr, Qi Han, S. Lohmann, T. Ertl. Proceedings of Graphics Interface 2016, pp. 109–116. DOI: 10.20380/GI2016.14
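The three-step pipeline (cluster hierarchically, pick levels of detail, display one level at a time) can be sketched with standard tools. This is an illustrative reimplementation on synthetic data, not the authors' system; the clustering method and level choices are assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
points = rng.normal(size=(40, 8))      # synthetic high-dimensional data

# Step 1: build the cluster hierarchy
tree = linkage(points, method="ward")

# Step 2: pick levels of detail as maximum cluster counts. Step 3 would then
# project one level at a time (e.g., cluster centroids) onto the 2D plane.
for k in (2, 5, 10):
    labels = fcluster(tree, t=k, criterion="maxclust")
    print(k, "->", len(np.unique(labels)), "clusters")
```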
Deepika Vaddi, P. Dugas, I. Dolgov, Rina R. Wehbe, L. Nacke
Cooperative communication mechanics, such as avatar gestures or in-game visual pointers, enable player collaboration directly through gameplay. We currently lack a deeper understanding of how players use cooperative communication mechanics, and whether they can effectively supplement or even supplant traditional voice and chat communication. The present research investigated player communication in Portal 2 by testing the game's native cooperative communication mechanics for dyads of players in custom test chambers. Consistent with our initial hypothesis, players functioned best when they had access to both cooperative communication mechanics and voice. We found that players preferred voice communication, but perceived cooperative communication mechanics as necessary to coordinate interdependent actions.
"Investigating the Impact of Cooperative Communication Mechanics on Player Performance in Portal 2." Proceedings of Graphics Interface 2016, pp. 41–48. DOI: 10.20380/GI2016.06
Partition of Unity Parametrics (PUPs) are a generalization of NURBS that allow us to use arbitrary basis functions for modeling parametric curves and surfaces. One interesting problem is finding subdivision schemes for this recently developed and flexible class of parametrics. In this paper, we introduce a systematic approach for determining uniform subdivision of PUPs curves and tensor-product surfaces. Our approach formulates PUPs subdivision as a least squares problem, which enables us to find exact subdivision filters for refinable basis functions and optimal approximate schemes for irrefinable ones. To illustrate this approach, we provide sample subdivision schemes with different properties, which are further demonstrated through various examples.
"A Subdivision Framework for Partition of Unity Parametrics." Amirhessam Moltaji, Adam Runions, F. Samavati. Proceedings of Graphics Interface 2016, pp. 21–31. DOI: 10.20380/GI2016.04
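The least-squares formulation can be illustrated on a basis that is known to be refinable. Assuming a uniform cubic B-spline, whose exact subdivision filter is [1, 4, 6, 4, 1]/8, sampling the two-scale relation on a grid and solving in the least-squares sense recovers that filter. This is a sketch of the idea only, not the paper's implementation:

```python
import numpy as np

def b3(t):
    """Uniform cubic B-spline basis function on support [0, 4]."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    m = (t >= 0) & (t < 1); out[m] = t[m]**3 / 6
    m = (t >= 1) & (t < 2); out[m] = (-3*t[m]**3 + 12*t[m]**2 - 12*t[m] + 4) / 6
    m = (t >= 2) & (t < 3); out[m] = (3*t[m]**3 - 24*t[m]**2 + 60*t[m] - 44) / 6
    m = (t >= 3) & (t < 4); out[m] = (4 - t[m])**3 / 6
    return out

# Sample the coarse basis and the dilated translates on a dense grid, then
# solve the least-squares system A x = b for the subdivision filter x.
ts = np.linspace(0, 4, 401)
A = np.stack([b3(2*ts - k) for k in range(5)], axis=1)
coeffs, *_ = np.linalg.lstsq(A, b3(ts), rcond=None)
print(np.round(coeffs * 8, 6))  # close to [1. 4. 6. 4. 1.] for this refinable basis
```

For an irrefinable basis the same system has no exact solution, and the least-squares minimizer yields the optimal approximate filter instead.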
In this paper, we propose an algorithm for closed and smooth 3D surface reconstruction from unorganized planar cross sections. We address the problem in its full generality and show the algorithm's effectiveness on sparse sets of cutting planes. Our algorithm is based on the construction of a globally consistent signed distance function over the cutting planes, using a split-and-merge approach with Hermite mean-value interpolation for triangular meshes. This work improves on recent approaches by providing a simplified construction that avoids the need for post-processing to smooth the reconstructed object boundary. We provide reconstruction results and a comparison with other algorithms.
"3D Surface Reconstruction from Unorganized Sparse Cross Sections." Ojaswa Sharma, Nidhi Agarwal. Proceedings of Graphics Interface 2016, pp. 33–40. DOI: 10.20380/GI2016.05
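A one-dimensional stand-in conveys the per-plane ingredient: along a single cutting line, the cross section is a set of "inside" intervals, and the signed distance at a point is the distance to the nearest section boundary, negated inside the material. This is an illustrative sketch under those assumptions, not the paper's construction, which additionally makes such per-plane functions globally consistent in 3D:

```python
import numpy as np

def signed_distance_1d(xs, intervals):
    """Signed distance along one cutting line: negative inside the material
    intervals, positive outside (a 1D stand-in for the per-plane SDF)."""
    xs = np.asarray(xs, dtype=float)
    endpoints = np.array([e for iv in intervals for e in iv], dtype=float)
    dist = np.min(np.abs(xs[:, None] - endpoints[None, :]), axis=1)
    inside = np.zeros(len(xs), dtype=bool)
    for a, b in intervals:
        inside |= (xs >= a) & (xs <= b)
    return np.where(inside, -dist, dist)

xs = np.array([0.0, 1.0, 2.5, 4.0])
print(signed_distance_1d(xs, [(2.0, 3.0)]))  # [ 2.   1.  -0.5  1. ]
```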