
Latest publications: Proceedings of the International Conference on Advanced Visual Interfaces

A Question-Oriented Visualization Recommendation Approach for Data Exploration
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399849
R. A. D. Lima, Simone Diniz Junqueira Barbosa
The increasingly rapid growth of data production, and the consequent need to explore data to obtain answers to the most varied questions, have promoted the development of tools that facilitate the manipulation and construction of data visualizations. However, building useful data visualizations is not a trivial task: it may involve a large number of subtle decisions by experienced designers. In this paper, we present an approach that uses a set of heuristics to recommend data visualizations associated with questions, in order to make the recommendations easier to understand and to assist the visual exploration process. Our approach was implemented and evaluated through the VisMaker tool. We carried out two studies comparing VisMaker with Voyager 2 and analyzed some aspects of the recommendation approaches through the participants' feedback. As a result, we found some advantages of our approach and gathered comments to help improve the development of visualization recommender tools.
Citations: 2
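The abstract does not spell out VisMaker's heuristics; as a purely hypothetical sketch of the question-oriented idea, a recommender might map question types plus column data types to chart types (the rules and names below are illustrative assumptions, not the paper's actual rule set):

```python
# Hypothetical question-oriented visualization recommender.
# The heuristics are invented for illustration; VisMaker's real rules
# are described in the paper itself.

def recommend(question_kind, x_type, y_type=None):
    """Map a question template and column data types to a chart type."""
    if question_kind == "distribution":
        return "histogram" if x_type == "quantitative" else "bar chart"
    if question_kind == "correlation" and x_type == y_type == "quantitative":
        return "scatter plot"
    if question_kind == "trend" and x_type == "temporal":
        return "line chart"
    if question_kind == "comparison" and x_type == "categorical":
        return "bar chart"
    return "table"  # fallback when no heuristic matches

print(recommend("trend", "temporal", "quantitative"))            # line chart
print(recommend("correlation", "quantitative", "quantitative"))  # scatter plot
```

Tying each recommendation to the question that triggered it is what, per the abstract, makes the recommendations easier for users to understand.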
ARCA. Semantic exploration of a bookstore
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399939
Eleonora Bernasconi, Miguel Ceriani, Massimo Mecella, T. Catarci, M. C. Capanna, Clara di Fazio, R. Marcucci, Erik Pender, Fabio Maria Petriccione
In this demo paper, we present ARCA, a visual-search-based system that allows the semantic exploration of a bookstore. Navigating a domain-specific knowledge graph, students and researchers alike can start from any specific concept and reach any other related concept, discovering associated books and information. To achieve this paradigm of interaction, we built a prototype system, flexible and adaptable to multiple contexts of use, that extracts semantic information from the contents of a book corpus, building a dedicated knowledge graph that is linked to external knowledge bases. The web-based user interface of ARCA integrates text-based search, visual knowledge-graph navigation, and linear visualization of filtered books (ordered according to multiple criteria) in a comprehensive coordinated view aimed at exploiting the underlying data while avoiding information overload and unnecessary clutter. A proof-of-concept of ARCA is available online at http://arca.diag.uniroma1.it
Citations: 7
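A minimal sketch of the concept-to-concept exploration pattern the abstract describes, over an invented toy graph (ARCA's real graph is extracted from a book corpus and linked to external knowledge bases; the data below is placeholder only):

```python
# Toy concept graph and book index; both are invented for illustration.
concept_links = {
    "semantics": ["linguistics", "knowledge graph"],
    "knowledge graph": ["RDF", "linked data"],
}
books_about = {
    "knowledge graph": ["Book A"],
    "linked data": ["Book B"],
}

def explore(start, depth=2):
    """Collect concepts reachable from `start` within `depth` hops,
    returning each concept with the books attached to it."""
    seen, frontier = {start}, [start]
    for _ in range(depth):
        frontier = [n for c in frontier for n in concept_links.get(c, [])
                    if n not in seen]
        seen.update(frontier)
    return {c: books_about.get(c, []) for c in sorted(seen)}

print(explore("semantics"))
```

Starting from "semantics", a two-hop exploration reaches "RDF" and "linked data" and surfaces the books attached to each concept along the way, which is the navigation experience the abstract describes.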
TuVe: A Shape-changeable Display using Fluids in a Tube
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399874
Saya Suzunaga, Yuichi Itoh, Yuki Inoue, Kazuyuki Fujita, T. Onoye
We propose TuVe, a novel shape-changing display consisting of a flexible tube and fluids, in which the droplets flowing through the tube form the display medium that represents information. In this system, the flow of each colored droplet is driven by valves and a pump connected to the tube. The display part employs a flexible tube that can be shaped into any structure (e.g., wrapped around a specific object), which is achieved by a calibration step that captures the tube structure using image processing with a camera. A performance evaluation reveals that our prototype succeeds in controlling each droplet with a positional error of 2 mm or less, small enough to show simple characters such as alphabetic letters on a 7 × 7-pixel display. We also discuss example applications, such as large public displays and flow-direction visualization, that illustrate the characteristics of the TuVe display.
Citations: 6
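As a rough illustration of how a 7 × 7-pixel droplet display might be fed, the sketch below flattens a glyph into an ordered droplet sequence; the serpentine tube layout (alternate rows run right-to-left) and the "ink"/"clear" encoding are assumptions for illustration, not TuVe's actual valve/pump control scheme:

```python
# Hypothetical encoding of a 7x7 glyph as a droplet sequence for a tube
# folded in a serpentine path. TuVe's real hardware control is not shown.

GLYPH_T = [
    "#######",
    "   #   ",
    "   #   ",
    "   #   ",
    "   #   ",
    "   #   ",
    "   #   ",
]

def droplet_sequence(glyph):
    """Flatten a glyph into droplet colors ('ink'/'clear'), reversing
    every other row to follow the serpentine tube path."""
    seq = []
    for i, row in enumerate(glyph):
        cells = row if i % 2 == 0 else row[::-1]
        seq.extend("ink" if ch == "#" else "clear" for ch in cells)
    return seq

seq = droplet_sequence(GLYPH_T)
print(len(seq), seq.count("ink"))  # 49 droplets, 13 of them inked
```

Once the ordered sequence is known, each "ink" or "clear" entry would correspond to one droplet injected at the tube inlet, with the camera-based calibration correcting for where each segment of tube actually sits.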
Designing a Self-help Mobile App to Cope with Avoidance Behavior in Panic Disorder
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399816
M. Paratore, Maria Claudia Buzzi, M. Buzzi
Panic disorder (PD) is an anxiety disorder that in recent years has spread worldwide. PD is diagnosed when a person has recurring panic attacks, characterized by physical symptoms and disturbing thoughts and feelings that arise rapidly, reach their peak within a few minutes, and soon disappear. Panic attacks, despite being harmless and relatively short, are highly distressing and deeply affect the lives of patients, who very often develop agoraphobia, an anxiety disorder that leads to systematic avoidance of places where previous attacks have occurred. PD is often a chronic condition that does not respond well to pharmacological treatment. However, psychotherapeutic approaches such as mindfulness have proved quite effective, and their delivery through self-care eHealth tools has been encouraged by the World Health Organization. In this paper, we present a self-help mobile app designed by and for patients affected by PD with mild agoraphobia. The app is aimed at helping users cope with avoidance behavior. Thanks to geolocation, the app automatically detects the proximity of a "critical place" (i.e., one where a previous attack has occurred) and suggests mindfulness strategies for coping with stress, in order to prevent anxiety escalation and panic. This paper describes the therapeutic background of the proposed application, as well as the mHealth best practices we strove to adopt in the design phase. Preliminary trials conducted with one patient are encouraging; nonetheless, we point out the need for further and more extensive tests to fully assess the effectiveness of our approach.
Citations: 0
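The geolocation trigger the abstract describes can be sketched as a simple geofence test; the 200 m radius, the coordinate-pair storage, and the function names below are assumptions for illustration, not the app's actual parameters:

```python
# Illustrative geofence check for "critical places". Radius and data
# layout are assumed; the paper does not specify them.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))

def near_critical_place(lat, lon, critical_places, radius_m=200):
    """True if (lat, lon) lies within radius_m of any stored critical place,
    i.e., the moment the app would suggest a mindfulness strategy."""
    return any(haversine_m(lat, lon, p[0], p[1]) <= radius_m
               for p in critical_places)
```

On a phone, this check would run against periodic location updates, so the coaching prompt appears before the user reaches the avoided place rather than after.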
Space-free Gesture Interaction with Humanoid Robot
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399949
S. Humayoun, M. Faizan, Zuhair Zafar, K. Berns
In general, humanoid robots mostly use fixed devices (e.g., cameras or sensors) to detect human non-verbal communication, which has limitations in many real-life scenarios. Wearable devices could play an important role in such scenarios. To address this, we propose using the Myo armband for human-robot interaction through hand- and arm-based gestures. We present our end-to-end Spagti framework, which is first used to train user gestures with the Myo armband and then to interact with a humanoid robot, called ROBIN, in real time using space-free gestures.
Citations: 2
Evaluating User Preferences for Augmented Reality Interactions with the Internet of Things
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399716
Shreya Chopra, F. Maurer
We investigate user preferences for controlling IoT devices with headset-based Augmented Reality (AR), comparing gestural control and voice control. An elicitation study was performed with 16 participants to gather their preferred voice commands and gestures for a set of referents. We analyzed 784 inputs (392 gestures and 392 voice commands), as well as observations and interviews, to develop an empirical basis for design recommendations that form a guideline for future designers and implementors of voice commands and gestures for interacting with the IoT via headset-based AR.
Citations: 5
Externalizing Mental Images by Harnessing Size-Describing Gestures: Design Implications for a Visualization System
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399920
S. A. Brown, Sharon Lynn Chu Yew Yee, Neha Rani
People use a significant amount of gesture when engaging in creative brainstorming. This is especially typical of creative workers, who frequently convey ideas, designs, and stories to team members. The gestures produced during natural conversation contain information that is not necessarily conveyed through speech. This paper investigates the design of a system that uses people's gestures in natural communication contexts to produce external visualizations of their mental imagery, focusing on gestures that describe dimension-related information. While much psycholinguistics research addresses how gestures relate to the representations of concepts, little HCI work has explored the possibilities of harnessing gestures to support thinking. We conducted a study to explore how people gesture when using a basic gesture-based visualization system in simulated creative gift-design scenarios, with the goal of deriving design implications. Both quantitative and qualitative data were collected, allowing us to ascertain which features of a gesture-based visualization system (e.g., users' spatial frames of reference and listener types) need to be accounted for in design. Results showed that our system managed to visualize users' envisioned gift dimensions, but that the visualized object area significantly affected users' perceived accuracy of the system. We extract themes as to which dimensions are important in the design of a gesture-based visualization system, and the possible uses of such a system from the participants' perspectives. We discuss implications for the design of gesture-based visualization systems to support creative work and possibilities for future directions of research.
Citations: 0
Examining the Presentation of Information in Augmented Reality Headsets for Situational Awareness
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399846
Julia Woodward, Jesse Smith, Isaac Wang, S. Cuenca, J. Ruiz
Augmented Reality (AR) headsets are being employed in industrial settings (e.g., the oil industry); however, there has been little work on how information should be presented in these headsets, especially in the context of situational awareness. We present a study examining three different presentation styles (Display, Environment, Mixed Environment) for textual secondary information in AR headsets. We found that the Display and Environment presentation styles assisted in perception and comprehension. Our work contributes a first step to understanding how to design visual information in AR headsets to support situational awareness.
Citations: 2
A Framework for Biometric Recognition in Online Content Delivery Platforms
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399969
M. Marras, G. Fenu
In this paper, we introduce a modular framework that aims to empower online platforms with biometric-related capabilities, minimizing the user's interaction cost. First, we describe core concepts and architectural aspects characterizing the proposed framework. Then, as a use case, we integrate it in an e-learning platform to provide biometric recognition at the time of login and continuous identity verification in well-rounded areas of the platform.
Citations: 0
Neural Data-Driven Captioning of Time-Series Line Charts
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399829
Andrea Spreafico, G. Carenini
The success of neural methods for image captioning suggests that similar benefits can be reaped for generating captions for information visualizations. In this preliminary study, we focus on the very popular line charts. We propose a neural model which aims to generate text from the same data used to create a line chart. Due to the lack of suitable training corpora, we collected a dataset through crowdsourcing. Experiments indicate that our model outperforms relatively simple non-neural baselines.
Citations: 16
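The "relatively simple non-neural baselines" the abstract mentions are not specified; one plausible example of such a baseline, written here as an assumption for illustration, is a template caption driven by the endpoints of the series:

```python
# Hypothetical non-neural baseline captioner for a line chart: a fixed
# sentence template filled from the first and last values of the series.

def caption(values, label="the series"):
    """Emit a template sentence describing the overall trend of a series."""
    if len(values) < 2:
        return f"{label} has too few points to describe."
    first, last = values[0], values[-1]
    if last > first:
        trend = "rises"
    elif last < first:
        trend = "falls"
    else:
        trend = "stays flat"
    return f"{label} {trend} from {first} to {last} over {len(values)} points."

print(caption([3, 5, 4, 8], "Revenue"))
# Revenue rises from 3 to 8 over 4 points.
```

A baseline like this captures only the coarsest trend and misses local features (the dip from 5 to 4 above), which is exactly the kind of gap a neural model trained on crowdsourced captions could close.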