
Proceedings. Graphics Interface (Conference) — Latest Publications

A conversation with CHCCS 2020 achievement award winner Ravin Balakrishnan
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.01
R. Balakrishnan
The 2020 CHCCS/SCDHM Achievement Award from the Canadian Human-Computer Communications Society is presented to Dr. Ravin Balakrishnan. This award recognizes his significant and varied contributions in the areas of Human-Computer Interaction (HCI), Information and Communications Technology for Development, and Interactive Computer Graphics. Ravin's work has had a tremendous impact on real-world applications. His research includes early innovations in areas such as 3D user interfaces, large display input, multitouch gestures, freehand input, and pen-based computing, which has informed and inspired techniques and technologies that are now commonplace in commercial products. This article presents a conversation between Ravin Balakrishnan and Prof. Tovi Grossman (University of Toronto) that took place in April 2020.
Citations: 0
ColorArt: Suggesting Colorizations For Graphic Arts Using Optimal Color-Graph Matching
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.11
Murtuza Bohra, Vineet Gandhi
Colorization is a complex task of selecting a combination of colors and arriving at an appropriate spatial arrangement of the colors in an image. In this paper, we propose a novel approach for automatic colorization of graphic arts such as graphic patterns, info-graphics, and cartoons. Our approach uses an artist's colored graphics as a reference to color a template image. We also propose a retrieval system for selecting a relevant reference image corresponding to the given template from a dataset of reference images colored by different artists. Finally, we formulate the problem of colorization as an optimal graph matching problem over color groups in the reference and the template image. We demonstrate results on a variety of coloring tasks and evaluate our model through multiple perceptual studies. The studies show that participants significantly prefer the results generated by our model over those of other automatic colorization methods.
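The core of the formulation above is assigning reference colors to template color groups so that the total mismatch is minimized. A minimal sketch of that assignment idea, using brute-force search over permutations (feasible only for the small palettes typical of graphic patterns; the paper's actual graph-matching formulation and cost function are not reproduced here):

```python
from itertools import permutations

def color_dist(c1, c2):
    # squared Euclidean distance between two RGB colors
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def best_color_assignment(template_palette, reference_palette):
    # Exhaustively search assignments of reference colors to template
    # color groups, keeping the one with the lowest total color distance.
    best_cost, best_map = float("inf"), None
    for perm in permutations(reference_palette, len(template_palette)):
        cost = sum(color_dist(t, r) for t, r in zip(template_palette, perm))
        if cost < best_cost:
            best_cost, best_map = cost, dict(zip(template_palette, perm))
    return best_map
```

For larger palettes one would swap the permutation search for a polynomial-time assignment solver such as the Hungarian algorithm.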
Citations: 4
Bend or PIN: Studying Bend Password Authentication with People with Vision Impairment
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.19
Daniella Briotto Faustino, Sara Nabil, A. Girouard
People living with vision impairment can be vulnerable to attackers when entering passwords on their smartphones, as their technology is more 'observable'. While researchers have proposed tangible interactions such as bend input as an alternative authentication method, little work has evaluated this method with people with vision impairment. This paper extends previous work by presenting our user study of bend passwords with 16 participants living with varying levels of vision impairment or blindness. Each participant created their own passwords using both PIN codes and BendyPass, a combination of bend gestures performed on a flexible device. We explored whether BendyPass does indeed offer advantages over PINs and evaluated the usability of both. Our findings show bend passwords have learnability and memorability potential as a tactile authentication method for people with vision impairment, and could be faster to enter than PINs. However, BendyPass still has limitations relating to security and usability.
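A bend password, like a PIN, is an ordered sequence of discrete symbols drawn from a small vocabulary, so it can be stored and verified the same way. A minimal sketch under that assumption; the gesture vocabulary below is hypothetical, not the paper's actual BendyPass gesture set:

```python
import hashlib

# Hypothetical bend-gesture vocabulary for illustration only.
GESTURES = {"top-left-up", "top-left-down", "top-right-up",
            "top-right-down", "side-up", "side-down"}

def hash_bend_password(gestures):
    # Store the ordered gesture sequence hashed, as one would a PIN.
    assert all(g in GESTURES for g in gestures)
    return hashlib.sha256("|".join(gestures).encode()).hexdigest()

def verify_bend_password(attempt, stored_hash):
    # Order matters: the same gestures in a different order must fail.
    return hash_bend_password(attempt) == stored_hash
```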
Citations: 3
Gaggle: Visual Analytics for Model Space Navigation
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.15
Subhajit Das, Dylan Cashman, Remco Chang, A. Endert
Recent visual analytics systems make use of multiple machine learning models to better fit the data, as opposed to traditional single, pre-defined model systems. However, while multi-model visual analytics systems can be effective, their added complexity poses usability concerns, as users are required to interact with the parameters of multiple models. Further, the advent of various model algorithms and associated hyperparameters creates an exhaustive model space to sample models from, making it complex to navigate this space to find the right model for the data and the task. In this paper, we present Gaggle, a multi-model visual analytics system that enables users to interactively navigate the model space. By translating user interactions into inferences, Gaggle simplifies working with multiple models by automatically finding the best model from the high-dimensional model space to support various user tasks. Through a qualitative user study, we show how our approach helps users find the best model for a classification and ranking task. The study results confirm that Gaggle is intuitive and easy to use, supporting interactive model space navigation.
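The "model space" in the abstract is the cross product of algorithms and their hyperparameter settings, and the best-model search maximizes some task-specific score over it. A minimal sketch of that idea under those assumptions (Gaggle's actual inference from user interactions is not reproduced here):

```python
from itertools import product

def enumerate_model_space(hyper_grids):
    # The model space: every (algorithm, hyperparameter) combination
    # drawn from per-algorithm hyperparameter grids.
    for algo, grid in hyper_grids.items():
        keys = sorted(grid)
        for values in product(*(grid[k] for k in keys)):
            yield algo, dict(zip(keys, values))

def best_model(hyper_grids, score):
    # Automatically pick the candidate maximizing a task-specific score,
    # standing in for the system's best-model search.
    return max(enumerate_model_space(hyper_grids),
               key=lambda model: score(*model))
```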
Citations: 3
AnimationPak: Packing Elements with Scripted Animations
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.39
Reza Adhitya Saputra, C. Kaplan, P. Asente
We present AnimationPak, a technique to create animated packings by arranging animated two-dimensional elements inside a static container. We represent animated elements in a three-dimensional spacetime domain, and view the animated packing problem as a three-dimensional packing in that domain. Every element is represented as a discretized spacetime mesh. In a physical simulation, meshes grow and repel each other, consuming the negative space in the container. The final animation frames are cross sections of the three-dimensional packing at a sequence of time values. The simulation trades off between the evenness of the negative space in the container, the temporal coherence of the animation, and the deformations of the elements. Elements can be guided around the container and the entire animation can be closed into a loop.
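The key geometric step above is extracting animation frames as cross sections of the spacetime packing. A simplified sketch of slicing a spacetime mesh at a time value, assuming the mesh is given as edges between (t, x, y) vertices (the paper's discretized meshes carry more structure than this):

```python
def cross_section(spacetime_edges, t):
    # Slice a spacetime mesh at time t: each edge joins two (t, x, y)
    # vertices, and edges crossing the time plane contribute one
    # linearly interpolated 2D point of the resulting animation frame.
    frame = []
    for (t0, x0, y0), (t1, x1, y1) in spacetime_edges:
        if t0 != t1 and min(t0, t1) <= t <= max(t0, t1):
            s = (t - t0) / (t1 - t0)
            frame.append((x0 + s * (x1 - x0), y0 + s * (y1 - y0)))
    return frame
```

Sampling a sequence of t values then yields the frames of the packed animation.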
Citations: 1
Local Editing of Cross-Surface Mappings with Iterative Least Squares Conformal Maps
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.20
Donya Ghafourzadeh, Srinivasan Ramachandran, Martin de Lasa, T. Popa, Eric Paquette
In this paper, we propose a novel approach to improve a given surface mapping through local refinement. The approach receives an established mapping between two surfaces and follows four phases: (i) inspection of the mapping and creation of a sparse set of landmarks in mismatching regions; (ii) segmentation with a low-distortion region-growing process based on flattening the segmented parts; (iii) optimization of the deformation of segmented parts to align the landmarks in the planar parameterization domain; and (iv) aggregation of the mappings from segments to update the surface mapping. In addition, we propose a new approach to deform the mesh in order to meet constraints (in our case, the landmark alignment of phase (iii)). We incrementally adjust the cotangent weights for the constraints and apply the deformation in a fashion that guarantees that the deformed mesh will be free of flipped faces and will have low conformal distortion. Our new deformation approach, Iterative Least Squares Conformal Mapping (ILSCM), outperforms other low-distortion deformation methods. The approach is general, and we tested it by improving the mappings from different existing surface mapping methods. We also tested its effectiveness by editing the mappings for a variety of 3D objects.
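The cotangent weights that ILSCM adjusts incrementally come from the standard discrete Laplacian: the weight of an edge sums the cotangents of the two angles opposite it. A minimal sketch of that basic ingredient for a single triangle corner, using 2D points for illustration (the paper's iterative adjustment scheme itself is not reproduced):

```python
def cotangent_weight_2d(p_i, p_j, p_k):
    # Cotangent of the angle at vertex p_k opposite edge (p_i, p_j):
    # cot(theta) = (u . v) / |u x v| for the two edge vectors at p_k.
    ux, uy = p_i[0] - p_k[0], p_i[1] - p_k[1]
    vx, vy = p_j[0] - p_k[0], p_j[1] - p_k[1]
    dot = ux * vx + uy * vy
    cross = abs(ux * vy - uy * vx)
    return dot / cross
```

A right angle at p_k gives weight 0, and narrower angles give larger weights, which is what makes these weights sensitive to mesh quality.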
Citations: 1
The Effect of Visual and Interactive Representations on Human Performance and Preference with Scalar Data Fields
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.23
Han L. Han, Miguel A. Nacenta
2D scalar data fields are often represented as heatmaps because color can help viewers perceive structure without having to interpret individual digits. Although heatmaps and color mapping have received much research attention, there are alternative representations that have been generally overlooked and might overcome heatmap problems. For example, color perception is subject to context-based perceptual bias and high error, which can be addressed through representations that use digits to enable more accurate value reading. We designed a series of three experiments that compare five techniques: a regular table of digits (Digits), a state-of-the-art heatmap (Color), a heatmap with an interactive tooltip showing the value under the cursor (Tooltip), a heatmap with the digits overlapped over it (DigitsColor), and FatFonts. Data analysis from the three experiments, which test locating values, finding extrema, and clustering tasks, shows that overlapping digits on color (DigitsColor) offers a substantial increase in accuracy (between 10 and 60 percentage points of improvement over the plain heatmap (Color), depending on the task) at the cost of extra time when locating extrema or forming clusters, but no extra time when locating values. The interactive tooltip offered a poor speed-accuracy tradeoff, but participants preferred it to the plain heatmap (Color) or digits-only (Digits) representations. We conclude that hybrid color-digit representations of scalar data fields could be highly beneficial for uses where spatial resolution and speed are not the main concern.
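Any DigitsColor-style view has to keep the overlaid digits legible against cell colors of varying brightness. One plausible implementation detail (an assumption for illustration, not taken from the paper) is to switch digit color by the background's relative luminance:

```python
def digit_overlay_color(r, g, b):
    # Choose black or white digits for legibility over a heatmap cell,
    # using the Rec. 709 relative-luminance weights on 0-255 channels.
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return "black" if luminance > 128 else "white"
```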
Citations: 2
StarHopper: A Touch Interface for Remote Object-Centric Drone Navigation
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.32
Jiannan Li, Ravin Balakrishnan, Tovi Grossman
Camera drones, a rapidly emerging technology, offer people the ability to remotely inspect an environment with a high degree of mobility and agility. However, manual remote piloting of a drone is prone to errors. In contrast, autopilot systems can require a significant degree of environmental knowledge and are not necessarily designed to support flexible visual inspections. Inspired by camera manipulation techniques in interactive graphics, we designed StarHopper, a novel touch screen interface for efficient object-centric camera drone navigation.
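Object-centric camera control typically keeps the camera on a sphere around the target object, with touch input steering the spherical coordinates. A generic orbit formulation under that assumption, not StarHopper's exact control mapping:

```python
import math

def orbit_camera(target, radius, azimuth, elevation):
    # Object-centric navigation: the camera stays on a sphere around the
    # target. Touch drags would adjust azimuth/elevation (radians) and a
    # pinch would adjust radius; here we just compute the camera position.
    tx, ty, tz = target
    ce = math.cos(elevation)
    return (tx + radius * ce * math.cos(azimuth),
            ty + radius * ce * math.sin(azimuth),
            tz + radius * math.sin(elevation))
```

For a drone, the returned position would feed a position controller while the camera yaw is kept pointed at the target.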
Citations: 7
Lean-Interaction: passive image manipulation in concurrent multitasking
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.40
D. Schott, Benjamin Hatscher, F. Joeres, Mareike Gabele, Steffi Hußlein, C. Hansen
Complex bi-manual tasks often benefit from supporting visual information and guidance. Controlling the system that provides this information is a secondary task that forces the user to perform concurrent multitasking, which in turn may affect the main task performance. Interactions based on natural behavior are a promising solution to this challenge. We investigated the performance of these interactions in a hands-free image manipulation task carried out during a primary manual task performed in an upright stance. Essential tasks were extracted from an example clinical workflow and turned into an abstract simulation to gain general insights into how different interaction techniques impact the user's performance and workload. The interaction techniques we compared were full-body movements, facial expression, gesture and speech input. We found that leaning as an interaction technique facilitates significantly faster image manipulation at lower subjective workloads than facial expression. Our results pave the way towards efficient, natural, hands-free interaction in a challenging multitasking environment.
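A lean-based control like the one studied above usually filters out small postural sway before mapping body angle to a manipulation velocity. A minimal sketch with a dead zone and linear gain; the specific thresholds are illustrative, not values from the study:

```python
def lean_to_velocity(angle_deg, dead_zone=2.0, gain=5.0):
    # Map torso lean angle to an image-panning velocity: ignore sway
    # inside the dead zone, then scale the remaining lean linearly.
    # dead_zone and gain are illustrative parameters.
    if abs(angle_deg) <= dead_zone:
        return 0.0
    sign = 1.0 if angle_deg > 0 else -1.0
    return sign * (abs(angle_deg) - dead_zone) * gain
```

Subtracting the dead zone before applying the gain keeps the velocity continuous at the dead-zone boundary, avoiding a jump when the user starts leaning.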
Citations: 0
Immersive Visualization of the Classical Non-Euclidean Spaces using Real-Time Ray Tracing in VR 在VR中使用实时光线追踪的经典非欧几里得空间的沉浸式可视化
Pub Date : 2020-01-01 DOI: 10.20380/GI2020.42
L. Velho, V. Silva, Tiago Novello
This paper presents a system for immersive visualization of non-Euclidean spaces using real-time ray tracing. It exploits the capabilities of the new generation of GPUs based on NVIDIA's Turing architecture to develop new methods for intuitive exploration of landscapes featuring non-trivial geometry and topology in virtual reality.
This paper presents a system for immersive visualization of non-Euclidean spaces using real-time ray tracing. It exploits the capabilities of the new generation of GPUs based on NVIDIA's Turing architecture to develop new methods for intuitive exploration of landscapes with non-trivial geometry and topology in virtual reality.
{"title":"Immersive Visualization of the Classical Non-Euclidean Spaces using Real-Time Ray Tracing in VR","authors":"L. Velho, V. Silva, Tiago Novello","doi":"10.20380/GI2020.42","DOIUrl":"https://doi.org/10.20380/GI2020.42","url":null,"abstract":"This paper presents a system for immersive visualization of Non-Euclidean spaces using real-time ray tracing. It exploits the capabilities of the new generation of GPU’s based on the NVIDIA’s Turing architecture in order to develop new methods for intuitive exploration of landscapes featuring non-trivial geometry and topology in virtual reality.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"21 1","pages":"423-430"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80826441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
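The abstract describes tracing rays in curved geometries but, as an abstract, gives no implementation detail. Purely as an illustration of the underlying idea (not the paper's actual method), the sketch below sphere-traces along geodesics of the 3-sphere S³, one of the classical non-Euclidean spaces: points live on the unit sphere in R⁴, a "ray" is the great-circle geodesic γ(t) = cos(t)·p + sin(t)·v, and distance to an object is measured by geodesic (angular) distance. All function names and parameters here are assumptions for this sketch.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def geodesic(p, v, t):
    # Great-circle geodesic on the unit 3-sphere S^3 embedded in R^4:
    # gamma(t) = cos(t)*p + sin(t)*v, where p is a unit point and v a unit
    # tangent vector with v . p = 0; the parameter t is arc length.
    return tuple(math.cos(t) * pi + math.sin(t) * vi for pi, vi in zip(p, v))

def sdf_ball(x, center, radius):
    # Signed geodesic (angular) distance from x to a ball around center.
    cos_d = max(-1.0, min(1.0, dot(x, center)))  # clamp for acos
    return math.acos(cos_d) - radius

def sphere_trace(p, v, center, radius, max_t=math.pi, eps=1e-5, max_steps=128):
    # Sphere tracing along a geodesic instead of a straight Euclidean ray:
    # advance by the current distance bound until we hit or give up.
    t = 0.0
    for _ in range(max_steps):
        d = sdf_ball(geodesic(p, v, t), center, radius)
        if d < eps:
            return t  # hit: arc length from p to the surface
        t += d
        if t > max_t:
            return None  # miss within half a great circle
    return None
```

Because geodesics in S³ are closed curves of length 2π, a miss is declared once the marched arc length exceeds half a great circle; a renderer would run this loop per pixel, with v derived from the camera frame.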
Proceedings. Graphics Interface (Conference)