Evaluating Temporal Delays and Spatial Gaps in Overshoot-avoiding Mouse-pointing Operations
Shota Yamanaka. Graphics Interface 2020, pp. 440–451. doi:10.20380/GI2020.44
For hover-based UIs (e.g., pop-up windows) and scrollable UIs, we investigated mouse-pointing performance for users trying to avoid overshooting a target while aiming for it. Three experiments were conducted with a 1D pointing task in which overshooting was accepted (a) within a temporal delay, (b) via a spatial gap between the target and an unintended item, and (c) with both a delay and a gap. We found that, in general, movement times tended to increase with a shorter delay and a smaller gap when these parameters were tested independently. Therefore, Fitts’ law cannot accurately predict movement times when various values of delay and/or gap are used. We found that a delay of 800 ms is required to remove the negative effects of distractors for densely arranged targets, but we found no optimal gap.

Evaluation of Body-Referenced Graphical Menus in Virtual Environments
Irina Lediaeva, J. Laviola. Graphics Interface 2020, pp. 308–316. doi:10.20380/GI2020.31
Graphical menus have been extensively used in desktop applications and widely adopted and integrated into virtual environments (VEs). However, while desktop menus are well evaluated and established, the 2D menus adopted in VEs still lack a thorough evaluation. In this paper, we present the results of a comprehensive study on body-referenced graphical menus in a virtual environment. We compare menu placements (spatial, arm, hand, and waist) in conjunction with various shapes (linear and radial) and selection techniques (ray-casting with a controller device, head, and eye gaze). We examine task completion time, error rates, number of target re-entries, and user preference for each condition and provide design recommendations for spatial, arm, hand, and waist graphical menus. Our results indicate that the spatial, hand, and waist menus are significantly faster than the arm menus, and that the eye-gaze selection technique is more prone to errors and has a significantly higher number of target re-entries than the other selection techniques. Additionally, we found that a significantly higher number of participants ranked the spatial graphical menus as their favorite menu placement and the arm menu as their least favorite.

Peephole Steering: Speed Limitation Models for Steering Performance in Restricted View Sizes
Shota Yamanaka, Hiroki Usuba, Haruki Takahashi, Homei Miyashita. Graphics Interface 2020, pp. 461–469. doi:10.20380/GI2020.46
The steering law is a model for predicting the time and speed of passing through a constrained path. When people can view only a limited range of the path ahead, they limit their speed in preparation for possibly needing to turn at a corner. However, few studies have focused on how limited views affect steering performance, and no quantitative models have been established. The results of a mouse steering study showed that speed was limited linearly by the path width and by the square root of the viewable forward distance. While a baseline model showed an adjusted R² = 0.144 for predicting speed, our best-fit model showed an adjusted R² = 0.975 with only one additional coefficient, demonstrating comparatively high prediction accuracy for given viewable forward distances.

Workflow Graphs: A Computational Model of Collective Task Strategies for 3D Design Software
Minsuk Chang, B. Lafreniere, Juho Kim, G. Fitzmaurice, Tovi Grossman. Graphics Interface 2020, pp. 114–124. doi:10.20380/GI2020.13
This paper introduces Workflow graphs, or W-graphs, which encode how the approaches taken by multiple users performing a fixed 3D design task converge and diverge from one another. The graph’s nodes represent equivalent intermediate task states across users, and directed edges represent how a user moved between these states, inferred from screen recording videos, command log data, and task content history. The result is a data structure that captures alternative methods for performing sub-tasks (e.g., modeling the legs of a chair) and alternative strategies of the overall task. As a case study, we describe and exemplify a computational pipeline for building W-graphs using screen recordings, command logs, and 3D model snapshots from an instrumented version of the Tinkercad 3D modeling application, and present graphs built for two sample tasks. We also illustrate how W-graphs can facilitate novel user interfaces with scenarios in workflow feedback, on-demand task guidance, and instructor dashboards.

A Baseline Study of Emphasis Effects in Information Visualization
Aristides Mairena, M. Dechant, C. Gutwin, A. Cockburn. Graphics Interface 2020, pp. 327–339. doi:10.20380/GI2020.33
Emphasis effects – visual changes that make certain elements more prominent – are commonly used in information visualization to draw the user’s attention or to indicate importance. Although theoretical frameworks of emphasis exist (that link visually diverse emphasis effects through the idea of visual prominence compared to background elements), most metrics for predicting how emphasis effects will be perceived by users come from abstract models of human vision which may not apply to visualization design. In particular, it is difficult for designers to know, when designing a visualization, how different emphasis effects will compare and how to ensure that the user’s experience with one effect will be similar to that with another. To address this gap, we carried out two studies that provide empirical evidence about how users perceive different emphasis effects, using three visual variables (colour, size, and blur/focus) and eight strength levels. Results from gaze tracking, mouse clicks, and subjective responses in our first study show that there are significant differences between different kinds of effects and between levels. Our second study tested the effects in realistic visualizations taken from the MASSVIS dataset, and saw similar results. We developed a simple predictive model from the data in our first study, and used it to predict the results in the second; the model was accurate, with high correlations between predictions and real values. Our studies and empirical models provide new information for designers who want to understand how emphasis effects will be perceived by users.

Generation of 3D Human Models and Animations Using Simple Sketches
Alican Akman, Y. Sahillioğlu, T. M. Sezgin. Graphics Interface 2020, pp. 28–36. doi:10.20380/GI2020.05
Generating 3D models from 2D images or sketches is a widely studied and important problem in computer graphics. We describe the first method to generate a 3D human model from a single sketched stick figure. In contrast to existing human modeling techniques, our method requires neither a statistical body shape model nor a rigged 3D character model. We exploit Variational Autoencoders to develop a novel framework capable of transitioning from a simple 2D stick figure sketch to a corresponding 3D human model. Our network learns the mapping between the input sketch and the output 3D model. Furthermore, our model learns the embedding space around these models. We demonstrate that our network can generate not only 3D models but also 3D animations through interpolation and extrapolation in the learned embedding space. Extensive experiments show that our model learns to generate reasonable 3D models and animations.

SheetKey: Generating Touch Events by a Pattern Printed with Conductive Ink for User Authentication
Shota Yamanaka, Tung D. Ta, K. Tsubouchi, Fuminori Okuya, Kenji Tsushio, Kunihiro Kato, Y. Kawahara. Graphics Interface 2020, pp. 452–460. doi:10.20380/GI2020.45
Personal identification numbers (PINs) and grid patterns have been used for user authentication, such as for unlocking smartphones. However, they carry the risk that attackers will learn the PINs and patterns by shoulder surfing. We propose a secure authentication method called SheetKey that requires complicated and quick touch inputs that can only be accomplished with a sheet that has a pattern printed with conductive ink. Using SheetKey, users can input a complicated combination of touch events within 0.3 s by just swiping the pad of their finger on the sheet. We investigated the requirements for producing SheetKeys, e.g., the optimal disc diameter for generating touch events. In a user study, 13 participants passed through authentication by using SheetKeys at success rates of 78–87%, while attackers using manual inputs had success rates of 0–27%. We also discuss the degree of complexity based on entropy and further improvements, e.g., entering passwords on alphabetical keyboards.

Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces
Nils Rodrigues, C. Schulz, Antoine Lhuillier, D. Weiskopf. Graphics Interface 2020, pp. 382–392. doi:10.20380/GI2020.38
We present a novel variant of parallel coordinates plots (PCPs) in which we show clusters in 2D subspaces of multivariate data and emphasize flow between them. We achieve this by duplicating and stacking individual axes vertically. On a high level, our cluster-flow layout shows how data points move from one cluster to another in different subspaces. We achieve cluster-based bundling and limit plot growth through the reduction of available vertical space for each duplicated axis. Although we introduce space between clusters, we preserve the readability of intra-cluster correlations by starting and ending with the original slopes from regular PCPs and drawing Hermite spline segments in between. Moreover, our rendering technique enables the visualization of small and large data sets alike. Cluster-flow PCPs can even propagate the uncertainty inherent to fuzzy clustering through the layout and rendering stages of our pipeline. Our layout algorithm is based on A*. It achieves an optimal result with regard to a novel set of cost functions that allow us to arrange axes horizontally (dimension ordering) and vertically (cluster ordering).

Computer Vision Applications and their Ethical Risks in the Global South
Charles-Olivier Dufresne Camaro, Fanny Chevalier, Syed Ishtiaque Ahmed. Graphics Interface 2020, pp. 158–167. doi:10.20380/GI2020.17
We present a study of recent advances in computer vision (CV) research for the Global South to identify the main uses of modern CV and its most significant ethical risks in the region. We review 55 research papers and analyze them along three principal dimensions: where the technology was designed, the needs addressed by the technology, and the potential ethical risks arising following deployment. Results suggest: 1) CV is most used in policy planning and surveillance applications, 2) privacy violation is the most likely and most severe risk to arise from modern CV systems designed for the Global South, and 3) researchers from the Global North differ from researchers from the Global South in their uses of CV to solve problems in the Global South. Results of our risk analysis also differ from previous work on CV risk perception in the West, suggesting locality to be a critical component of each risk’s importance.

Scope and Impact of Visualization in Training Professionals in Academic Medicine
V. Bandi, Debajyoti Mondal, B. Thoma. Graphics Interface 2020, pp. 84–94. doi:10.20380/GI2020.10
Professional training often requires need-based scheduling and observation-based assessment. In this paper, we present a visualization platform for managing such training data in a medical education domain, where the learners are resident physicians and the educators are certified doctors. The system was developed through four focus groups with the residents and their educators over six major development iterations. We present how the professionals involved, the nature of training, the choice of display devices, and the overall assessment process influenced the design of the visualizations. The final system was deployed as a web tool for the department of emergency medicine and evaluated by both the residents and their educators in an uncontrolled longitudinal study. Our analysis of four months of user logs revealed interesting usage patterns consistent with real-life training events and showed an improvement in several key learning metrics when compared to historical values during the same study period. The users’ feedback showed that both educators and residents found our system to be helpful in real-life decision making.