
Latest publications in Proceedings of the International Conference on Advanced Visual Interfaces

AVI2CH 2020: Workshop on Advanced Visual Interfaces and Interactions in Cultural Heritage
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3400869
Angeliki Antoniou, B. D. Carolis, G. Raptis, Cristina Gena, T. Kuflik, A. Dix, A. Origlia, George Lepouras
AVI2CH is a meeting place for researchers and practitioners focusing on the application of advanced information and communication technology in cultural heritage (CH), with a specific focus on user interfaces, visualization, and interaction. It builds on the series of PATCH workshops held since 2007, including three at AVI, as well as a series of European workshops on cultural informatics. Eleven papers range from novel interfaces in museums to wider community engagement; all share a common mission to ensure that the latest digital technology helps preserve the past in ways that enrich the lives of current and future generations.
Citations: 2
Conversational Interfaces for a Smart Campus: A Case Study
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399914
Marta Bortoli, M. Furini, S. Mirri, M. Montangero, Catia Prandi
Spoken language is the most natural interface for a human being and, thanks to the scientific and technological advances of recent decades, we now have voice assistance devices that let us interact with a machine through natural language. Vocal user interfaces (VUIs) are now included in many technological devices, such as desktop and laptop computers, smartphones and tablets, navigators, and home speakers, and have been welcomed by the market. The use of voice assistants can also be interesting and strategic in educational contexts and in public environments. This paper presents a case study on the design, development, and assessment of a prototype devoted to assisting students in their daily activities in a smart campus context.
Citations: 10
Incidental Visualizations: Pre-Attentive Primitive Visual Tasks
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399841
João Moreira, Daniel Mendes, Daniel Gonçalves
In InfoVis design, visualizations make use of pre-attentive features to highlight visual artifacts and guide users' perception toward relevant information during primitive visual tasks. These tasks are supported by visual marks such as dots, lines, and areas. However, prior research assumes that our pre-attentive processing only allows us to detect specific features in charts. We argue that a visualization can be perceived entirely pre-attentively and still convey relevant information. In this work, combining cognitive perception and psychophysics, we conducted a user study with six primitive visual tasks to verify whether they could be performed pre-attentively. The tasks were to identify horizontal and vertical positions, the length and slope of lines, the size of areas, and color luminance intensity. Users were presented with very simple visualizations, one encoded value at a time, allowing us to assess accuracy and response time. Our results showed that horizontal position identification is the most accurate and fastest task, and color luminance intensity identification the worst. We believe our study is the first step into a fresh field called Incidental Visualizations, where visualizations are meant to be seen at a glance and with little effort.
Citations: 3
Data4Good
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3400864
Luigi De Russis, Neha Kumar, Akhil Mathur
We are witnessing unprecedented datafication of the society we live in, alongside rapid advances in the fields of Artificial Intelligence and Machine Learning. However, emergent data-driven applications are systematically discriminating against many diverse populations. A major driver of this bias is the data, which typically align with predominantly Western definitions and lack representation from multilingually diverse and resource-constrained regions across the world. Therefore, data-driven approaches can benefit from integrating a more human-centred orientation before being used to inform the design, deployment, and evaluation of technologies in various contexts. This workshop seeks to advance these and similar conversations by inviting researchers and practitioners in interdisciplinary domains to discuss how appropriate human-centred design can contribute to addressing data-related challenges among marginalised and under-represented or underserved groups.
Citations: 4
Visual user interfaces for human motion
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3400859
L. G. M. Ader, B. Caulfield, Benoît Bossavit, K. E. Raheb, M. Raynal, N. Vigouroux, K. Ting, Pourang Irani, J. Vanderdonckt
Visual interfaces are important for capturing human motion, visualizing it, and facilitating motion-based interactive systems. This workshop aims to provide a platform for researchers, designers, and users to discuss the challenges of designing visual interfaces for motion-based interaction, in terms of both the visualization (e.g., graphical user interfaces, multimodal feedback, evaluation) and the processing (e.g., data collection, treatment, interpretation, recognition) of human movement (e.g., motor skills, amplitude of movements, limitations). We will share experiences and lessons learned, and elaborate on tools for developing all the possible applications going forward.
Citations: 0
Interaction in Volumetric Film: An Overview
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399957
Krzysztof Pietroszek
Volumetric filmmaking is a novel and inherently interactive medium. In volumetric film, the viewer takes over the director's responsibility for selecting the point of view from which the story is told; the viewer becomes the cinematographer and the editor of the film at the moment of viewing. In this paper, we provide an overview of interaction modes in volumetric film and compare volumetric film to both traditional film and 360° video.
Citations: 1
Giving Motivation for Using Secure Credentials through User Authentication by Game
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399950
Tetsuji Takada, Yumeji Hattori
One of the issues in knowledge-based user authentication is that users do not set and use secure credentials. Some methods exist to address this issue, such as password policies, education, and password meters. However, these countermeasures impose a usability cost that many users find difficult to accept, and so they have not propelled users toward secure credentials. We consider that motivating users is necessary for them to voluntarily accept the cost of using secure credentials. We therefore attach a role-playing game function to pattern-based user authentication, providing users an incentive through the act of authenticating. A small experiment with eight participants demonstrated that the prototype system has the potential to prompt users to use secure credentials.
Citations: 1
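The password meter mentioned among the conventional countermeasures can be sketched as a simple scoring function. This is an illustrative assumption, not the paper's mechanism: the scoring rules below (length plus character variety) are invented for the example.

```python
# Minimal sketch of a password meter: score a credential 0-4 by length
# and character variety. The thresholds and rules are illustrative
# assumptions, not taken from the paper.
import re

def password_score(password: str) -> int:
    """Return a strength score from 0 (weak) to 4 (strong)."""
    score = 0
    if len(password) >= 8:                                          # long enough
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1                                                  # mixed case
    if re.search(r"\d", password):                                  # has a digit
        score += 1
    if re.search(r"[^A-Za-z0-9]", password):                        # has a symbol
        score += 1
    return score
```

The paper's argument is that such meters impose a usability cost without motivating users; the game layer is meant to supply the missing incentive.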
Caarvida
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399862
A. Achberger, René Cutura, Oguzhan Türksoy, M. Sedlmair
We report on an interdisciplinary visual analytics project wherein automotive engineers analyze test drive videos. These videos are annotated with navigation-specific augmented reality (AR) content, and the engineers need to identify issues and evaluate the behavior of the underlying AR navigation system. With the increasing amount of video data, traditional analysis approaches can no longer be conducted in an acceptable timeframe. To address this issue, we collaboratively developed Caarvida, a visual analytics tool that helps engineers to accomplish their tasks faster and handle an increased number of videos. Caarvida combines automatic video analysis with interactive and visual user interfaces. We conducted two case studies which show that Caarvida successfully supports domain experts and speeds up their task completion time.
Citations: 2
Introducing Artificial Commensal Companions
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399958
M. Mancini, C. Gallagher, Radoslaw Niewiadomski, Gijs Huisman, Merijn Bruijnes
The term commensality refers to "sharing food and eating together in a social group." In this paper, we hypothesize that it would be possible to have the same kind of experience in an HCI setting, thanks to a new type of interface that we call an Artificial Commensal Companion (ACC), which would be beneficial, for example, to people who voluntarily choose, or are constrained, to eat alone. To this aim, we introduce an interactive system implementing an ACC in the form of a robot with non-verbal socio-affective capabilities. Future tests are already planned to evaluate its influence on the eating experience of human participants.
Citations: 0
Augmented Situated Visualization for Spatial and Context-Aware Decision-Making
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399838
R. Guarese, João Becker, Henrique Fensterseifer, M. Walter, C. Freitas, L. Nedel, Anderson Maciel
When entering indoor spaces such as classrooms or auditoriums, people may try to analyze the room and choose an appropriate place to sit for the event. Several criteria may be accounted for, and most are not self-evident or trivial. This work proposes the use of data visualization allied to an Augmented Reality (AR) user interface to help users find the most convenient seats. We consider sets of arbitrary demands and project information directly atop the seats and all around the room. Users can also narrow down the search by switching and combining the attributes being displayed, e.g., temperature or wheelchair accessibility. The proposed approach was tested against a comparable 2D interactive visualization of the same data in usability assessments of seat-choosing tasks with a set of users (N = 16). Qualitative and quantitative data indicated that the AR-based solution is promising, suggesting that AR may help users make more accurate decisions, even in an ordinary daily task. Regarding Augmented Situated Visualization, our results open new avenues for the exploration of context-aware data.
Citations: 5
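The "switching and combining attributes" step above amounts to ranking seats by a weighted combination of per-seat scores. A minimal sketch, with attribute names, weights, and values invented for illustration (the paper does not specify its scoring model):

```python
# Hypothetical seat ranking: each seat carries normalized attribute
# scores in [0, 1]; a user combines them with personal weights.
# All names and numbers here are illustrative assumptions.
def rank_seats(seats, weights):
    """Sort seats by weighted attribute score, best first."""
    def score(seat):
        return sum(weights.get(attr, 0.0) * value
                   for attr, value in seat["attributes"].items())
    return sorted(seats, key=score, reverse=True)

seats = [
    {"id": "A1", "attributes": {"view": 0.9, "temperature": 0.4, "accessibility": 1.0}},
    {"id": "B3", "attributes": {"view": 0.6, "temperature": 0.9, "accessibility": 0.0}},
]

# A wheelchair user might weight accessibility heavily:
best = rank_seats(seats, {"view": 1.0, "temperature": 1.0, "accessibility": 3.0})[0]
```

In the AR interface described, the analogous result would be projected directly onto the seats rather than returned as a list.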