
Latest publications in Proceedings. Graphics Interface (Conference)

Towards Enabling Blind People to Fill Out Paper Forms with a Wearable Smartphone Assistant.
Pub Date : 2021-05-01 DOI: 10.20380/GI2021.18
Shirin Feiz, Anatoliy Borodin, Xiaojun Bi, I V Ramakrishnan

We present PaperPal, a wearable smartphone assistant that blind people can use to fill out paper forms independently. Unique features of PaperPal include: a novel 3D-printed attachment that transforms a conventional smartphone into a wearable device with an adjustable camera angle; the capability to work on both flat stationary tables and portable clipboards; real-time video tracking of pen and paper, coupled to an interface that generates real-time audio readouts of the form's text content and instructions guiding the user to the form fields; and support for filling out these fields without signature guides. The paper primarily focuses on an essential aspect of PaperPal: the accessible design of its wearable elements and the design, implementation, and evaluation of a novel user interface for the filling of paper forms by blind people. PaperPal distinguishes itself from recent work on a smartphone-based form-filling assistant for blind people, which requires the smartphone and the paper to be placed on a stationary desk, needs a signature guide for form filling, and offers no audio readouts of the form's text content. PaperPal, whose design was informed by a separate Wizard-of-Oz study with blind participants, was evaluated with 8 blind users. Results indicate that they can fill out form fields at the correct locations with an accuracy reaching 96.7%.
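
To make the interaction concrete, here is a minimal sketch (not the authors' implementation) of the kind of audio-guidance loop such an assistant needs: given a tracked pen-tip position and a known form-field location, it emits a spoken direction toward the field. The FormField coordinates, the 5 mm tolerance, and the speak() stub are illustrative assumptions.

```python
# A minimal sketch of a pen-to-field audio guidance step. Field
# coordinates, thresholds, and the speak() stub are assumptions,
# not details taken from the paper.

from dataclasses import dataclass

@dataclass
class FormField:
    name: str
    x: float  # field centre in page coordinates (mm), x to the right
    y: float  # y increasing down the page

def speak(text: str) -> None:
    """Stand-in for a real text-to-speech call."""
    print(f"[audio] {text}")

def guide_pen_to_field(pen_xy: tuple, field: FormField,
                       tolerance_mm: float = 5.0) -> bool:
    """Issue one guidance instruction; return True when the pen is on the field."""
    dx = field.x - pen_xy[0]
    dy = field.y - pen_xy[1]
    if abs(dx) <= tolerance_mm and abs(dy) <= tolerance_mm:
        speak(f"You are on the {field.name} field. Start writing.")
        return True
    horizontal = "right" if dx > 0 else "left"
    vertical = "down" if dy > 0 else "up"
    # Announce only the dominant direction to keep instructions short.
    if abs(dx) >= abs(dy):
        speak(f"Move {horizontal} {abs(dx):.0f} millimetres.")
    else:
        speak(f"Move {vertical} {abs(dy):.0f} millimetres.")
    return False

# usage: one step of the loop, pen at (40, 80), aiming for the name field
guide_pen_to_field((40.0, 80.0), FormField("name", 60.0, 82.0))
```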

Citations: 3
BayesGaze: A Bayesian Approach to Eye-Gaze Based Target Selection.
Pub Date : 2021-05-01 DOI: 10.20380/GI2021.35
Zhi Li, Maozheng Zhao, Yifan Wang, Sina Rashidian, Furqan Baig, Rui Liu, Wanyu Liu, Michel Beaudouin-Lafon, Brooke Ellison, Fusheng Wang, Ramakrishnan, Xiaojun Bi

Selecting targets accurately and quickly with eye-gaze input remains an open research question. In this paper, we introduce BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in an eye-gaze trajectory as a signal for selecting a target. It then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities, weighted by the sampling interval, to determine the selected target. The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution. Our investigation shows that BayesGaze improves target selection accuracy and speed over a dwell-based selection method and the Center of Gravity Mapping (CM) method. Our research shows that both accumulating the posterior and incorporating the prior are effective in improving the performance of eye-gaze based target selection.
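
As a concrete illustration of the accumulation scheme the abstract describes, the following sketch computes per-sample posteriors with Bayes' theorem, sums them weighted by each sample's duration, and nudges a categorical prior toward the selected target. The Gaussian likelihood, the sigma value, and the prior-update rule are our assumptions, not details taken from the paper.

```python
# A minimal sketch of Bayesian gaze-target selection under assumed
# likelihood and prior-update models.

import numpy as np

def gaussian_likelihood(sample_xy, target_xy, sigma=30.0):
    """p(gaze sample | target): isotropic Gaussian around the target centre."""
    d2 = np.sum((np.asarray(sample_xy) - np.asarray(target_xy)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def select_target(trajectory, durations, targets, prior):
    """trajectory: list of (x, y) gaze samples; durations: seconds per sample;
    targets: list of (x, y) centres; prior: categorical prior over targets."""
    scores = np.zeros(len(targets))
    for sample, dt in zip(trajectory, durations):
        like = np.array([gaussian_likelihood(sample, t) for t in targets])
        posterior = like * prior
        posterior /= posterior.sum()   # Bayes' theorem, normalised
        scores += dt * posterior       # weight by sampling interval
    return int(np.argmax(scores))

def update_prior(prior, selected, alpha=0.1):
    """Nudge the categorical prior toward the selected target (assumed rule)."""
    prior = prior * (1 - alpha)
    prior[selected] += alpha
    return prior / prior.sum()

# usage: three gaze samples near the second of three targets
targets = [(100, 100), (300, 120), (200, 300)]
prior = np.full(len(targets), 1 / len(targets))
traj = [(295, 118), (302, 125), (298, 119)]
dur = [0.016, 0.016, 0.016]
sel = select_target(traj, dur, targets, prior)
prior = update_prior(prior, sel)
```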

Citations: 9
Personal+Context navigation: combining AR and shared displays in network path-following
Pub Date : 2020-05-19 DOI: 10.20380/GI2020.27
R. James, A. Bezerianos, O. Chapuis, Maxime Cordeil, Tim Dwyer, Arnaud Prouzeau
Shared displays are well suited to public viewing and collaboration; however, they lack personal space to view private information and act without disturbing others. Combining them with Augmented Reality (AR) headsets allows interaction without altering the context on the shared display. We study a set of such interaction techniques in the context of network navigation, in particular path following, an important network analysis task. Applications abound, for example planning private trips on a network map shown on a public display. The proposed techniques allow for hands-free interaction, rendering visual aids inside the headset to help the viewer maintain a connection between the AR cursor and the network that is shown only on the shared display. In two experiments on path following, we found that persistent connections between the AR cursor and the network on the shared display work well for high-precision tasks, but more transient connections work best for lower-precision tasks. More broadly, we show that combining personal AR interaction with shared displays is feasible for network navigation.
Citations: 9
Interactive Exploration of Genomic Conservation
Pub Date : 2020-04-04 DOI: 10.20380/GI2020.09
V. Bandi, C. Gutwin
Comparative analysis in genomics involves comparing two or more genomes to identify conserved genetic information. These duplicated regions can indicate shared ancestry and can shed light on an organism's internal functions and evolutionary history. Due to rapid advances in sequencing technology, high-resolution genome data is now available for a wide range of species, and comparative analysis of this data can provide insights that can be applied in medicine, plant breeding, and many other areas. Comparative genomics is a strongly interactive task, and visualizing the location, size, and orientation of conserved regions can assist researchers by supporting critical activities of interpretation and judgement. However, visualization tools for the analysis of conserved regions have not kept pace with the increasing availability of genomic information and the new ways in which this data is being used by biological researchers. To address this gap, we gathered requirements for interactive exploration from three groups of expert genomic scientists, and developed a web-based tool with novel interaction techniques and visual representations to meet those needs. Our tool supports multi-resolution analysis, provides interactive filtering as researchers move deeper into the genome, supports revisitation to specific interface configurations, and enables loosely-coupled collaboration over the genomic data. An evaluation of the system with five researchers from three expert groups provides evidence about the success of our system's novel techniques for supporting interactive exploration of genomic conservation.
Citations: 43
Assistance for Target Selection in Mobile Augmented Reality
Pub Date : 2020-04-04 DOI: 10.20380/GI2020.07
Vinod Asokan, Scott Bateman, Anthony Tang
Mobile augmented reality – where a mobile device is used to view and interact with virtual objects displayed in the real world – is becoming more common. Target selection is the main method of interaction in mobile AR, but it is particularly difficult because targets in AR can have challenging characteristics, such as moving or being occluded (by digital or real-world objects). To address this problem, we conducted a comparative study of target assistance techniques designed for mobile AR. We compared four different cursor-based selection techniques against the standard touch-to-select interaction, finding that a newly adapted Bubble Cursor-based technique performs consistently best across a range of five target characteristics. Our work provides new findings demonstrating the promise of cursor-based target assistance in mobile AR.
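
The Bubble Cursor is a previously published technique (Grossman and Balakrishnan, 2005) that this study adapts to mobile AR; the sketch below shows only its core selection rule: the cursor's "bubble" grows to capture the target whose boundary is nearest, so every cursor position selects exactly one target. Circular targets and the example data are assumptions.

```python
# A minimal sketch of the classic Bubble Cursor selection rule,
# assuming circular targets.

import math

def bubble_cursor_pick(cursor, targets):
    """cursor: (x, y); targets: list of (x, y, radius).
    Returns the index of the target whose edge is closest to the cursor."""
    def edge_distance(t):
        tx, ty, r = t
        # distance from the cursor to the target's boundary, not its centre
        return math.hypot(cursor[0] - tx, cursor[1] - ty) - r
    return min(range(len(targets)), key=lambda i: edge_distance(targets[i]))

# usage: the second target (index 1) is picked even though the
# cursor is not directly on it
print(bubble_cursor_pick((50, 50), [(10, 10, 5), (60, 65, 8), (120, 40, 5)]))
```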
Citations: 2
Presenting Information Closer to Mobile Crane Operators' Line of Sight: Designing and Evaluating Visualisation Concepts Based on Transparent Displays
Pub Date : 2020-04-04 DOI: 10.20380/GI2020.41
T. Sitompul, Rikard Lindell, Markus Wallmyr, Antti Siren
We have investigated the visualization of safety information for mobile crane operations using transparent displays, where information can be presented closer to the operator's line of sight with minimal obstruction of their view. The intention of the design is to help operators acquire supportive information provided by the machine without requiring them to divert their attention far from operational areas. We started the design process by reviewing mobile crane safety guidelines to determine which information operators need to know in order to perform safe operations. Using the findings from the safety-guidelines review, we then conducted a design workshop to generate design ideas and visualisation concepts, as well as to delineate their appearance and behaviour based on the capabilities of transparent displays. We transformed the results of the workshop into a low-fidelity paper prototype, and then interviewed six mobile crane operators to obtain their feedback on the proposed concepts. The results of the study indicate that, since information will be presented closer to operators' line of sight, we need to be selective about what kind of information, and how much of it, is presented to operators. However, all the operators appreciated having information presented closer to their line of sight, as an approach that has the potential to improve safety in their operations.
Citations: 0
Bi-Axial Woven Tiles: Interlocking Space-Filling Shapes Based on Symmetries of Bi-Axial Weaving Patterns
Pub Date : 2020-04-04 DOI: 10.20380/GI2020.29
Vinayak R. Krishnamurthy, E. Akleman, S. Subramanian, K. Boyd, Chia-an Fu, M. Ebert, Courtney Startett, N Yadav
We present a framework for designing interlocking space-filling shapes which we call bi-axial woven tiles. Our framework is based on a unique combination of (1) Voronoi partitioning of space using curve segments as the Voronoi sites and (2) the design of these curve segments based on weave patterns closed under symmetry operations. The underlying weave geometry provides an interlocking property to the tiles, and the closure property under symmetry operations ensures that a single tile can fill space. To demonstrate this general framework, we focus on specific symmetry operations induced by bi-axial weaving patterns. We specifically showcase the design and fabrication of woven tiles using the most common 2-fold fabrics, called 2-way genus-1 fabrics: namely, plain, twill, and satin weaves.
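
As a small illustration of the first ingredient, the sketch below performs a discrete Voronoi partition with curve segments as sites: each curve is sampled densely, and every grid cell is labeled by the curve whose nearest sample lies closest. The two example curves and the grid resolution are illustrative assumptions, not geometry from the paper.

```python
# A minimal sketch of Voronoi partitioning of space using curve
# segments (represented as dense point samples) as the sites.

import numpy as np

def voronoi_labels(curves, grid_x, grid_y):
    """curves: list of (N_i, 2) arrays of points sampled along each curve.
    Returns an integer label array of shape (len(grid_y), len(grid_x))."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    pts = np.stack([xx.ravel(), yy.ravel()], axis=1)  # (M, 2) query points
    best = np.full(len(pts), np.inf)
    label = np.zeros(len(pts), dtype=int)
    for i, curve in enumerate(curves):
        # distance from every query point to its nearest sample on curve i
        d = np.min(np.linalg.norm(pts[:, None, :] - curve[None, :, :], axis=2), axis=1)
        closer = d < best
        best[closer] = d[closer]
        label[closer] = i
    return label.reshape(len(grid_y), len(grid_x))

# usage: two wavy curve segments partition the unit square
t = np.linspace(0, 1, 50)
c0 = np.stack([t, 0.3 + 0.1 * np.sin(6 * t)], axis=1)
c1 = np.stack([t, 0.7 + 0.1 * np.sin(6 * t + 2)], axis=1)
labels = voronoi_labels([c0, c1], np.linspace(0, 1, 64), np.linspace(0, 1, 64))
```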
Citations: 8
Gaze-based Command Activation Technique Robust Against Unintentional Activation using Dwell-then-Gesture
Pub Date : 2020-04-04 DOI: 10.20380/GI2020.26
Toshiya Isomoto, Shota Yamanaka, B. Shizuki
We present a gaze-based command activation technique that is robust to unintentional command activations, using a sequence of dwelling on a target and then performing a gesture (dwell-then-gesture manipulation). The gesture we adopt is a simple two-level stroke, which consists of a sequence of two orthogonal strokes. To achieve robustness against unintentional command activations, we design and fine-tune a gesture detection system based on how users move their gaze, as revealed through three experiments. Although our technique may appear to simply combine well-known dwell-based and gesture-based manipulations, and its success rate is not yet sufficient, our work is the first to enrich the command vocabulary to a level comparable to mouse-based interaction.
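
A minimal sketch of the dwell-then-gesture idea appears below: a sufficiently long dwell arms the detector, and a command is activated only when the dwell is followed by two orthogonal strokes. The time and distance thresholds and the stroke-direction test are our own assumptions, not the authors' fine-tuned detector.

```python
# A minimal sketch of dwell-then-gesture command activation with
# assumed thresholds.

import math

DWELL_TIME = 0.4   # seconds of fixation required to arm (assumed)
MIN_STROKE = 40.0  # minimum stroke length in pixels (assumed)

def stroke_direction(p0, p1):
    """Classify a stroke as 'horizontal' or 'vertical' by its dominant axis,
    or None if it is too short to count."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    if math.hypot(dx, dy) < MIN_STROKE:
        return None
    return "horizontal" if abs(dx) > abs(dy) else "vertical"

def detect_command(dwell_duration, stroke_points):
    """stroke_points: [start, corner, end] gaze positions after the dwell.
    Returns True only for a long-enough dwell followed by two orthogonal strokes."""
    if dwell_duration < DWELL_TIME or len(stroke_points) != 3:
        return False
    first = stroke_direction(stroke_points[0], stroke_points[1])
    second = stroke_direction(stroke_points[1], stroke_points[2])
    return first is not None and second is not None and first != second

# usage: a 0.5 s dwell, then a rightward stroke followed by a downward stroke
print(detect_command(0.5, [(0, 0), (60, 4), (64, 70)]))  # True
```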
Citations: 5
Effects of Visual Distinctiveness on Learning and Retrieval in Icon Toolbars
Pub Date : 2020-04-04 DOI: 10.20380/GI2020.12
Febi Chajadi, Md. Sami Uddin, C. Gutwin
Learnability is important in graphical interfaces because it supports the user's transition to expertise. One aspect of GUI learnability is the degree to which the icons in toolbars and ribbons are identifiable and memorable – but current "flat" and "subtle" designs that promote strong visual consistency could hinder learning by reducing visual distinctiveness within a set of icons. Little is known, however, about the effects of the visual distinctiveness of icons on selection performance and memorability. To address this gap, we carried out two studies using several icon sets with different degrees of visual distinctiveness, and compared how quickly people could learn and retrieve the icons. Our first study found no evidence that increasing colour or shape distinctiveness improved learning, but found that icons with concrete imagery were easier to learn. Our second study found similar results: there was no effect of increasing either colour or shape distinctiveness, but there was again a clear improvement for icons with recognizable imagery. Our results show that visual characteristics appear to affect UI learnability much less than the meaning of the icons' representations.
Citations: 8
AffordIt!: A Tool for Authoring Object Component Behavior in Virtual Reality
Pub Date : 2020-04-04 DOI: 10.20380/GI2020.34
Sina Masnadi, Andrés N. Vargas González, Brian M. Williamson, J. Laviola
In this paper we present AffordIt!, a tool for adding affordances to the component parts of a virtual object. Following 3D scene reconstruction and segmentation procedures, users find themselves with complete virtual objects to which no intrinsic behaviors have been assigned, forcing them to use unfamiliar desktop-based 3D editing tools. AffordIt! offers an intuitive solution that allows a user to select a region of interest with the mesh cutter tool, assign an intrinsic behavior, and view an animation preview of their work. To evaluate the usability and workload of AffordIt!, we ran an exploratory study to gather feedback. In the study we utilize two mesh cutter shapes that select a region of interest and two movement behaviors that a user then assigns to a common household object. The results show high usability with low workload ratings, demonstrating the feasibility of AffordIt! as a valuable 3D authoring tool. Based on these initial results, we also present a roadmap of future work that will improve the tool in future iterations.
Citations: 1