
Latest publications in Proceedings. Graphics Interface (Conference)

Evaluating Temporal Delays and Spatial Gaps in Overshoot-avoiding Mouse-pointing Operations
Pub Date: 2020-04-04 DOI: 10.20380/GI2020.44
Shota Yamanaka
For hover-based UIs (e.g., pop-up windows) and scrollable UIs, we investigated mouse-pointing performance for users trying to avoid overshooting a target while aiming for it. Three experiments were conducted with a 1D pointing task in which overshooting was accepted (a) within a temporal delay, (b) via a spatial gap between the target and an unintended item, and (c) with both a delay and a gap. We found that, in general, movement times tended to increase with a shorter delay and a smaller gap if these parameters were independently tested. Therefore, Fitts’ law cannot accurately predict the movement times when various values of delay and/or gap are used. We found that 800 ms is required to remove the negative effects of distractors for densely arranged targets, but we found no optimal gap.
Pages: 440-451
Citations: 1
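Fitts’ law, which the abstract reports mispredicting under varying delays and gaps, is commonly written as MT = a + b·log2(D/W + 1). A minimal sketch (the coefficients a and b are illustrative placeholders, not values fitted in the paper):

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.2):
    """Predict movement time (s) with Fitts' law (Shannon formulation).

    a and b are device- and user-specific regression coefficients;
    the defaults here are illustrative, not from the paper.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# The prediction depends only on distance D and width W, so two
# conditions with the same D and W but different delays or gaps get
# identical predicted times, which is the mismatch the study observed.
print(fitts_mt(512, 32))  # ID = log2(17), about 4.09 bits
```

Because neither the temporal delay nor the spatial gap appears in the formula, any slowdown they cause is invisible to the model.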
Evaluation of Body-Referenced Graphical Menus in Virtual Environments
Pub Date: 2020-04-04 DOI: 10.20380/GI2020.31
Irina Lediaeva, J. Laviola
Graphical menus have been extensively used in desktop applications and widely adopted and integrated into virtual environments (VEs). However, while desktop menus are well evaluated and established, the 2D menus adopted in VEs still lack a thorough evaluation. In this paper, we present the results of a comprehensive study on body-referenced graphical menus in a virtual environment. We compare menu placements (spatial, arm, hand, and waist) in conjunction with various shapes (linear and radial) and selection techniques (ray-casting with a controller device, head, and eye gaze). We examine task completion time, error rates, number of target re-entries, and user preference for each condition and provide design recommendations for spatial, arm, hand, and waist graphical menus. Our results indicate that the spatial, hand, and waist menus are significantly faster than the arm menus, and the eye gaze selection technique is more prone to errors and has a significantly higher number of target re-entries than the other selection techniques. Additionally, we found that a significantly higher number of participants ranked the spatial graphical menus as their favorite menu placement and the arm menu as their least favorite one.
Pages: 308-316
Citations: 7
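The factor crossing described in the abstract can be made concrete. Assuming a full factorial design (the exact crossing is an assumption for illustration, not stated in the abstract), the condition space is:

```python
from itertools import product

# Hypothetical full crossing of the study factors described above:
# 4 placements x 2 shapes x 3 selection techniques.
placements = ["spatial", "arm", "hand", "waist"]
shapes = ["linear", "radial"]
techniques = ["controller ray-cast", "head gaze", "eye gaze"]

conditions = list(product(placements, shapes, techniques))
print(len(conditions))  # 24 conditions
```

Each of the 24 cells would then be measured on completion time, error rate, target re-entries, and preference.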
Peephole Steering: Speed Limitation Models for Steering Performance in Restricted View Sizes
Pub Date: 2020-04-04 DOI: 10.20380/GI2020.46
Shota Yamanaka, Hiroki Usuba, Haruki Takahashi, Homei Miyashita
The steering law is a model for predicting the time and speed for passing through a constrained path. When people can view only a limited range of the path forward, they limit their speed in preparation for possibly needing to turn at a corner. However, few studies have focused on how limited views affect steering performance, and no quantitative models have been established. The results of a mouse steering study showed that speed was linearly limited by the path width and was limited by the square root of the viewable forward distance. While a baseline model showed an adjusted R2 = 0.144 for predicting the speed, our best-fit model showed an adjusted R2 = 0.975 with only one additional coefficient, demonstrating a comparatively high prediction accuracy for given viewable forward distances.
Pages: 461-469
Citations: 1
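The two speed limits reported in the abstract (linear in path width, square root in viewable forward distance) can be sketched as a combined bound. The min() combination and the coefficients below are illustrative assumptions, not the paper's fitted model:

```python
import math

def peephole_speed(path_width, view_dist, a=8.0, b=3.0):
    """One plausible reading of the abstract's speed limits: speed is
    bounded linearly by the path width and by the square root of the
    viewable forward distance, whichever bound is tighter. a and b
    are placeholder coefficients, not fitted values from the paper.
    """
    return min(a * path_width, b * math.sqrt(view_dist))

# With a narrow peephole, the view-distance bound dominates:
print(peephole_speed(path_width=20, view_dist=100))  # min(160, 30) -> 30.0
```

Widening the peephole relaxes the square-root bound until path width becomes the limiting factor again.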
Workflow Graphs: A Computational Model of Collective Task Strategies for 3D Design Software
Pub Date: 2020-04-04 DOI: 10.20380/GI2020.13
Minsuk Chang, B. Lafreniere, Juho Kim, G. Fitzmaurice, Tovi Grossman
This paper introduces Workflow graphs, or W-graphs, which encode how the approaches taken by multiple users performing a fixed 3D design task converge and diverge from one another. The graph’s nodes represent equivalent intermediate task states across users, and directed edges represent how a user moved between these states, inferred from screen recording videos, command log data, and task content history. The result is a data structure that captures alternative methods for performing sub-tasks (e.g., modeling the legs of a chair) and alternative strategies of the overall task. As a case study, we describe and exemplify a computational pipeline for building W-graphs using screen recordings, command logs, and 3D model snapshots from an instrumented version of the Tinkercad 3D modeling application, and present graphs built for two sample tasks. We also illustrate how W-graphs can facilitate novel user interfaces with scenarios in workflow feedback, on-demand task guidance, and instructor dashboards.
Pages: 114-124
Citations: 7
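The abstract's description of nodes (equivalent intermediate states) and labeled directed edges (per-user transitions with their evidence source) maps onto a small data structure. This is an illustrative sketch, not the paper's implementation:

```python
from collections import defaultdict

class WGraph:
    """Minimal sketch of a workflow graph: nodes are intermediate task
    states judged equivalent across users; directed edges record which
    user moved between two states and the evidence the move was
    inferred from (screen recording, command log, content history).
    """

    def __init__(self):
        # state -> list of (next_state, user, evidence) transitions
        self.edges = defaultdict(list)

    def add_transition(self, state, next_state, user, evidence):
        self.edges[state].append((next_state, user, evidence))

    def alternatives(self, state):
        """Distinct next states, i.e. alternative sub-task strategies."""
        return {nxt for nxt, _, _ in self.edges[state]}

# Hypothetical chair-modeling task with two divergent strategies:
g = WGraph()
g.add_transition("empty scene", "seat modeled", "user A", "command log")
g.add_transition("empty scene", "legs modeled", "user B", "screen recording")
print(g.alternatives("empty scene"))
```

Querying `alternatives` at any state surfaces the divergent approaches that the paper's feedback and guidance scenarios build on.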
A Baseline Study of Emphasis Effects in Information Visualization
Pub Date: 2020-01-25 DOI: 10.20380/GI2020.33
Aristides Mairena, M. Dechant, C. Gutwin, A. Cockburn
Emphasis effects – visual changes that make certain elements more prominent – are commonly used in information visualization to draw the user’s attention or to indicate importance. Although theoretical frameworks of emphasis exist (that link visually diverse emphasis effects through the idea of visual prominence compared to background elements), most metrics for predicting how emphasis effects will be perceived by users come from abstract models of human vision which may not apply to visualization design. In particular, it is difficult for designers to know, when designing a visualization, how different emphasis effects will compare and how to ensure that the user’s experience with one effect will be similar to that with another. To address this gap, we carried out two studies that provide empirical evidence about how users perceive different emphasis effects, using three visual variables (colour, size, and blur/focus) and eight strength levels. Results from gaze tracking, mouse clicks, and subjective responses in our first study show that there are significant differences between different kinds of effects and between levels. Our second study tested the effects in realistic visualizations taken from the MASSVIS dataset, and saw similar results. We developed a simple predictive model from the data in our first study, and used it to predict the results in the second; the model was accurate, with high correlations between predictions and real values. Our studies and empirical models provide new information for designers who want to understand how emphasis effects will be perceived by users.
Pages: 327-339
Citations: 5
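The "simple predictive model" the abstract builds from its first study can be approximated by an ordinary least-squares line over the eight strength levels. The noticeability scores below are synthetic stand-ins, not the study's measurements:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic noticeability scores for the eight strength levels,
# illustrative only: the real values come from gaze tracking,
# mouse clicks, and subjective responses in the first study.
levels = [1, 2, 3, 4, 5, 6, 7, 8]
noticed = [0.10, 0.22, 0.35, 0.44, 0.58, 0.66, 0.79, 0.88]

slope, intercept = fit_line(levels, noticed)
predict = lambda level: slope * level + intercept
print(round(predict(5), 2))
```

A model of this shape, fitted on study-one data, is then validated against the MASSVIS-based visualizations in study two.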
Generation of 3D Human Models and Animations Using Simple Sketches
Pub Date: 2020-01-25 DOI: 10.20380/GI2020.05
Alican Akman, Y. Sahillioğlu, T. M. Sezgin
Generating 3D models from 2D images or sketches is a widely studied important problem in computer graphics. We describe the first method to generate a 3D human model from a single sketched stick figure. In contrast to the existing human modeling techniques, our method requires neither a statistical body shape model nor a rigged 3D character model. We exploit Variational Autoencoders to develop a novel framework capable of transitioning from a simple 2D stick figure sketch to a corresponding 3D human model. Our network learns the mapping between the input sketch and the output 3D model. Furthermore, our model learns the embedding space around these models. We demonstrate that our network can generate not only 3D models, but also 3D animations through interpolation and extrapolation in the learned embedding space. Extensive experiments show that our model learns to generate reasonable 3D models and animations.
Pages: 28-36
Citations: 6
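The interpolation and extrapolation in the learned embedding space that the abstract mentions boils down to blending latent codes before decoding. A sketch with hypothetical latent vectors standing in for real sketch encodings:

```python
def lerp(z1, z2, t):
    """Linearly interpolate (0 <= t <= 1) or extrapolate (t outside
    [0, 1]) between two latent codes. Decoding the blended codes is
    how a learned embedding space yields in-between 3D poses and,
    sampled densely over t, an animation.
    """
    return [(1 - t) * a + t * b for a, b in zip(z1, z2)]

z_pose_a = [0.0, 1.0, -0.5]  # hypothetical encoding of pose A
z_pose_b = [1.0, 0.0, 0.5]   # hypothetical encoding of pose B
print(lerp(z_pose_a, z_pose_b, 0.5))  # midway code: [0.5, 0.5, 0.0]
```

Sweeping t from 0 to 1 and feeding each blended code to the decoder produces the frame sequence; t beyond 1 extrapolates past the second pose.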
SheetKey: Generating Touch Events by a Pattern Printed with Conductive Ink for User Authentication
Pub Date: 2020-01-25 DOI: 10.20380/GI2020.45
Shota Yamanaka, Tung D. Ta, K. Tsubouchi, Fuminori Okuya, Kenji Tsushio, Kunihiro Kato, Y. Kawahara
Personal identification numbers (PINs) and grid patterns have been used for user authentication, such as for unlocking smartphones. However, they carry the risk that attackers will learn the PINs and patterns by shoulder surfing. We propose a secure authentication method called SheetKey that requires complicated and quick touch inputs that can only be accomplished with a sheet that has a pattern printed with conductive ink. Using SheetKey, users can input a complicated combination of touch events within 0.3 s by just swiping the pad of their finger on the sheet. We investigated the requirements for producing SheetKeys, e.g., the optimal disc diameter for generating touch events. In a user study, 13 participants passed through authentication by using SheetKeys at success rates of 78–87%, while attackers using manual inputs had success rates of 0–27%. We also discuss the degree of complexity based on entropy and further improvements, e.g., entering passwords on alphabetical keyboards.
Pages: 452-460
Citations: 2
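The entropy-based complexity the abstract discusses is just log2 of the number of equally likely key patterns. The pattern-space sizes below are assumptions for illustration, not figures from the paper:

```python
import math

def pattern_entropy(num_patterns):
    """Entropy in bits of a key drawn uniformly from num_patterns."""
    return math.log2(num_patterns)

# A 4-digit PIN has 10**4 equally likely codes, while a sheet that
# produces, say, 20 touch events each landing in one of 8
# distinguishable pad regions would yield 8**20 patterns.
pin_bits = pattern_entropy(10 ** 4)    # about 13.3 bits
sheet_bits = pattern_entropy(8 ** 20)  # 60.0 bits
print(round(pin_bits, 1), sheet_bits)
```

The gap in bits illustrates why a printed touch pattern, entered in 0.3 s, can be far harder to reproduce manually than a PIN.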
Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces
Pub Date: 2020-01-25 DOI: 10.20380/GI2020.38
Nils Rodrigues, C. Schulz, Antoine Lhuillier, D. Weiskopf
We present a novel variant of parallel coordinates plots (PCPs) in which we show clusters in 2D subspaces of multivariate data and emphasize flow between them. We achieve this by duplicating and stacking individual axes vertically. On a high level, our cluster-flow layout shows how data points move from one cluster to another in different subspaces. We achieve cluster-based bundling and limit plot growth through the reduction of available vertical space for each duplicated axis. Although we introduce space between clusters, we preserve the readability of intra-cluster correlations by starting and ending with the original slopes from regular PCPs and drawing Hermite spline segments in between. Moreover, our rendering technique enables the visualization of small and large data sets alike. Cluster-flow PCPs can even propagate the uncertainty inherent to fuzzy clustering through the layout and rendering stages of our pipeline. Our layout algorithm is based on A*. It achieves an optimal result with regard to a novel set of cost functions that allow us to arrange axes horizontally (dimension ordering) and vertically (cluster ordering).
Pages: 382-392
Citations: 1
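The Hermite spline segments the abstract describes join two polyline endpoints while matching the original PCP slopes at both ends, which is what keeps intra-cluster correlations readable. A standard cubic Hermite basis, with illustrative values:

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite segment: starts at p0 with slope m0 and ends at
    p1 with slope m1 as t runs from 0 to 1. Used here to bridge the
    gap introduced between stacked clusters.
    """
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# Endpoint positions are met exactly at t = 0 and t = 1:
print(hermite(0.0, 1.0, 2.0, -2.0, 0.0))  # 0.0
print(hermite(0.0, 1.0, 2.0, -2.0, 1.0))  # 1.0
```

Because the basis functions h10 and h11 vanish at both endpoints, the curve passes through p0 and p1 exactly while inheriting the prescribed slopes there.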
Computer Vision Applications and their Ethical Risks in the Global South
Pub Date: 2020-01-25 DOI: 10.20380/GI2020.17
Charles-Olivier Dufresne Camaro, Fanny Chevalier, Syed Ishtiaque Ahmed
We present a study of recent advances in computer vision (CV) research for the Global South to identify the main uses of modern CV and its most significant ethical risks in the region. We review 55 research papers and analyze them along three principal dimensions: where the technology was designed, the needs addressed by the technology, and the potential ethical risks arising following deployment. Results suggest: 1) CV is most used in policy planning and surveillance applications, 2) privacy violations are the most likely and most severe risks to arise from modern CV systems designed for the Global South, and 3) researchers from the Global North differ from researchers from the Global South in their uses of CV to solve problems in the Global South. Results of our risk analysis also differ from previous work on CV risk perception in the West, suggesting locality to be a critical component of each risk’s importance.
Pages: 158-167
Citations: 5
Scope and Impact of Visualization in Training Professionals in Academic Medicine
Pub Date: 2020-01-25 DOI: 10.20380/GI2020.10
V. Bandi, Debajyoti Mondal, B. Thoma
Professional training often requires need-based scheduling and observation-based assessment. In this paper, we present a visualization platform for managing such training data in a medical education domain, where the learners are resident physicians and the educators are certified doctors. The system was developed through four focus groups with the residents and their educators over six major development iterations. We present how the professionals involved, the nature of training, the choice of display devices, and the overall assessment process influenced the design of the visualizations. The final system was deployed as a web tool for the department of emergency medicine, and evaluated by both the residents and their educators in an uncontrolled longitudinal study. Our analysis of four months of user logs revealed interesting usage patterns consistent with real-life training events and showed an improvement in several key learning metrics when compared to historical values during the same study period. The users’ feedback showed that both educators and residents found our system to be helpful in real-life decision making.
Proceedings. Graphics Interface (Conference), pages 84-94.
Citations: 0