
Proceedings of the 2022 International Conference on Advanced Visual Interfaces - Latest Publications

CoPDA 2022 - Cultures of Participation in the Digital Age: AI for Humans or Humans for AI?
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3535262
B. R. Barricelli, G. Fischer, D. Fogli, A. Mørch, A. Piccinno, S. Valtolina
The sixth edition of the CoPDA workshop is dedicated to discussing the current challenges and opportunities of Cultures of Participation with respect to Artificial Intelligence (AI), contrasting them with the objectives pursued by Human-Centered Design (HCD). The workshop aims to establish a forum to explore our basic assumption (and to provide at least partial evidence for it) that today's most successful AI systems depend on teams of humans, just as humans depend on these systems to gain access to information, obtain insights, and perform tasks beyond their own capabilities.
Citations: 1
Exploring a Multi-Device Immersive Learning Environment
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3534485
T. Onorati, P. Díaz, Telmo Zarraonandia, I. Aedo
Though virtual reality has been used to support learning for more than a decade, the technology is now mature and cheap enough, and students have the required digital fluency, for it to reach real educational settings. Immersive technologies have also demonstrated that they are not only engaging but can also reinforce learning and improve memory. This work presents a preliminary study on the advantages of using an immersive experience to help young students understand genetic editing techniques. We relied upon the CHIC Immersive Bubble Chart, a VR (Virtual Reality) multi-device visualization of the most relevant topics in the domain. We tested the CHIC Immersive Bubble Chart by asking a group of 29 students to explore the information space by interacting with two different devices: a desktop and a VR headset. The results show that they mostly preferred the VR headset, finding it more engaging and useful. In fact, during the evaluation, the students kept exploring the space even after the assigned time slot had ended.
Citations: 0
Video augmentation to support video-based learning
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3531179
Ilaria Torre, Ilenia Galluccio, M. Coccoli
Multimedia content and video-based learning are expected to take a central role in the post-pandemic world. Thus, providing new advanced interfaces and services that further exploit their potential becomes of paramount importance. A challenging area is the development of intelligent visual interfaces that integrate the knowledge extracted from multimedia materials into educational applications. In this respect, we designed a web-based video player that aims to support video consumption by exploiting the knowledge extracted from the video, in terms of the concepts explained in the video and the prerequisite relations between them. This knowledge is used to augment the video lesson through visual feedback methods. Specifically, in this paper we investigate the use of two types of visual feedback, i.e., an augmented transcript and a dynamic concept map (a map of the concepts' flow), to improve video comprehension in the first-watch learning context. Our preliminary findings suggest that both methods help the learner focus on the relevant concepts and their related contents. The augmented transcript has a higher impact on immediate comprehension than the map of the concepts' flow, even though the latter is expected to be more powerful in supporting other tasks such as exploration and in-depth analysis of the concepts in the video.
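To make the data side concrete, here is a minimal sketch, entirely ours rather than the authors' implementation, of how extracted concepts, their positions in the video, and their prerequisite relations could be modeled to drive both the augmented transcript and the concept-flow map; all names and the timestamp scheme are illustrative assumptions.

```python
# Illustrative sketch (not the paper's actual code): concepts anchored to video
# time spans, plus prerequisite relations between them.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    start: float                                   # seconds into the video
    end: float
    prerequisites: list[str] = field(default_factory=list)

def concepts_at(concepts: list[Concept], t: float) -> list[Concept]:
    """Concepts explained at playback time t (drives transcript highlighting)."""
    return [c for c in concepts if c.start <= t <= c.end]

def concept_flow(concepts: list[Concept]) -> list[tuple[str, str]]:
    """Edges of the dynamic concept map: prerequisite -> concept."""
    return [(p, c.name) for c in concepts for p in c.prerequisites]

# Example: a two-concept lesson.
lesson = [
    Concept("variables", 10.0, 95.0),
    Concept("loops", 95.0, 240.0, prerequisites=["variables"]),
]
print(concepts_at(lesson, 120.0))   # the "loops" concept is active at 2:00
print(concept_flow(lesson))         # [('variables', 'loops')]
```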
Citations: 4
Implicit Interaction Approach for Car-related Tasks On Smartphone Applications - A Demo
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3534465
Alba Bisante, Venkata Srikanth Varma Datla, Stefano Zeppieri, Emanuele Panizzi
Implicit interaction is a possible approach to improving the user experience of smartphone apps in car-related environments. Indeed, it can enhance safety and avoid unnecessary and repetitive interactions on the user's part. This demo paper presents a smartphone app based on an implicit interaction approach that automatically detects when the user enters and exits their vehicle. We describe the app's interface and usage, and how we plan to demonstrate its performance during the conference demo session.
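As a rough illustration of the implicit-interaction idea, a minimal sketch follows, assuming a stream of activity-recognition observations like those exposed by common mobile APIs; the event names, confidence threshold, and state machine are our assumptions, not the app's actual logic.

```python
# Hypothetical sketch: infer vehicle entry/exit from activity-recognition
# events, with no explicit user action required.
from enum import Enum, auto

class State(Enum):
    OUTSIDE = auto()
    IN_VEHICLE = auto()

def step(state: State, activity: str, confidence: float) -> State:
    """Update the inferred state from one (activity, confidence) observation."""
    if confidence < 0.7:                              # ignore low-confidence readings
        return state
    if state is State.OUTSIDE and activity == "IN_VEHICLE":
        return State.IN_VEHICLE                       # entry detected: run entry tasks
    if state is State.IN_VEHICLE and activity in ("WALKING", "ON_FOOT"):
        return State.OUTSIDE                          # exit detected: run exit tasks
    return state

# Example: an observation stream drives the state implicitly.
state = State.OUTSIDE
for obs in [("STILL", 0.9), ("IN_VEHICLE", 0.85), ("WALKING", 0.95)]:
    state = step(state, *obs)
print(state)  # State.OUTSIDE after entering and then leaving the vehicle
```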
Citations: 2
Humans in (Digital) Space: Representing Humans in Virtual Environments
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3531172
M. Lycett, Alex Reppel
Technology continues to pervade social and organizational life (e.g., immersive and artificial intelligence technologies), and our environments become increasingly virtual. In this context we examine the challenges of creating believable virtual human experiences: photo-realistic digital imitations of ourselves that can act as proxies capable of navigating complex virtual environments while demonstrating autonomous behavior. We first develop a framework for discussion, then use it to explore the state of the art in the context of human-like experience, autonomous behavior, and expansive environments. Last, we consider the key research challenges that emerge from our review as a call to action.
Citations: 2
OCFER-Net: Recognizing Facial Expression in Online Learning System
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3534470
Yi Huo, L. Zhang
Online learning has recently become very popular, especially during the global COVID-19 epidemic. Besides knowledge delivery, emotional interaction is also very important, and it can be captured by employing Facial Expression Recognition (FER). Since FER accuracy is essential for helping teachers gauge students' emotional state, the project explores a series of FER methods and finds that few works exploit the orthogonality of the convolution kernel matrix. Therefore, it enforces orthogonality on the kernels with a regularizer, which extracts features with more diversity and expressiveness, and delivers OCFER-Net. Experiments are carried out on FER-2013, a challenging dataset. Results show performance superior to baselines by 1.087. The code of the research project is publicly available at https://github.com/YeeHoran/OCFERNet.
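The abstract's central idea, a kernel-orthogonality regularizer, can be sketched in PyTorch as one common "soft orthogonality" penalty; the paper's exact formulation may differ (see the linked repository), and the layer sizes and weighting factor below are illustrative.

```python
# Sketch of a soft orthogonality regularizer over convolution kernels:
# each Conv2d weight is flattened per filter and pushed toward W W^T = I.
import torch
import torch.nn as nn

def orthogonality_penalty(model: nn.Module) -> torch.Tensor:
    """Sum of ||W W^T - I||_F^2 over all Conv2d layers in the model."""
    terms = []
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight.reshape(m.out_channels, -1)        # (out, in*kh*kw)
            gram = w @ w.t()                                # pairwise filter correlations
            eye = torch.eye(gram.size(0), device=w.device)
            terms.append(((gram - eye) ** 2).sum())
    return torch.stack(terms).sum()

# Usage: add the weighted penalty to the task loss during training
# (e.g., cross-entropy on FER-2013 batches); sizes here are arbitrary.
model = nn.Sequential(nn.Conv2d(1, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
loss = orthogonality_penalty(model) * 1e-4
loss.backward()
```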
Citations: 1
Enhancing Human-AI (H-AI) Collaboration On Design Tasks Using An Interactive Text/Voice Artificial Intelligence (AI) Agent
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3534478
Joseph Makokha
In this presentation, we demonstrate a way to develop a class of AI systems, the Disruptive Interjector (DI), which observes what a human is doing and then interjects with suggestions that aid idea generation or problem solving in a human-AI (H-AI) team; something that goes beyond current creativity support systems by replacing a human-human (H-H) team with an H-AI one. The proposed DI is distinct from tutors, chatbots, recommenders, and other similar systems, since it seeks to diverge from a solution (rather than converge towards one) by encouraging consideration of other possibilities. We develop a conceptual design of the system, then present examples from deep Convolutional Neural Network learning models [1,7]. The first example shows results from a model trained on an open-source dataset (publicly available online) of community technical support chat transcripts, while the second was trained on a design-focused dataset obtained from transcripts of experts engaged in engineering design problem solving (not publicly available). Based on the results from these models, we propose the necessary improvements to models and training datasets that must be made in order to achieve usable and reliable collaborative text/voice systems in this class of AI systems.
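Purely as a conceptual sketch of the observe-then-interject loop, not the authors' system, the following shows the control flow; the trigger rule and the suggestion generator are placeholders standing in for the trained models described above.

```python
# Conceptual sketch of a Disruptive Interjector loop with placeholder components.
import random

def divergent_suggestions(context: list[str], k: int = 3) -> list[str]:
    """Stand-in for a trained model; returns alternatives that diverge from context."""
    pool = ["invert the constraint", "swap the material", "split the function",
            "borrow a mechanism from another domain", "remove a component"]
    return random.sample(pool, k)

def disruptive_interjector(events: list[str], every: int = 4) -> None:
    """Observe a stream of designer actions; interject periodically with alternatives."""
    context: list[str] = []
    for i, event in enumerate(events, start=1):
        context.append(event)
        if i % every == 0:   # naive trigger; a real DI would model the task state
            print(f"After {event!r}, DI suggests: {divergent_suggestions(context)}")

disruptive_interjector([f"sketch-step-{n}" for n in range(1, 9)])
```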
Citations: 0
End-user Development and Closed-Reading: an Initial Investigation
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3531128
Sevda Abdollahinami, L. Ducceschi, M. Zancanaro
In this work, we explore the idea of designing a tool to augment the practice of closed-reading a literary text by employing end-user programming practices. The ultimate goal is to help young humanities students learn and appreciate computational thinking skills. The proposed approach is aligned with other methods of applying computer science techniques to explore literary texts (as in the digital humanities), but with original goals and means. An initial design concept has been realised as a probe to prompt discussion among humanities students and teachers. This short paper discusses the design ideas and the feedback from interviews and focus groups involving 25 participants (10 teachers in different humanities fields and 15 university students in the humanities, as prospective teachers and scholars).
Citations: 0
Exploring Extended Reality Multi-Robot Ground Control Stations
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3534469
Bryson Lawton, F. Maurer
This paper presents work-in-progress research exploring the use of extended reality headsets to overcome the intrinsic limitations of conventional, screen-based ground control stations. Specifically, we discuss an extended reality ground control station prototype developed to explore how the strengths of these immersive technologies can be leveraged to improve 3D information visualization, workspace scalability, natural interaction methods, and system mobility for multi-robot ground control stations.
Citations: 0
Supporting Secure Agile Development: the VIS-PRISE Tool
Pub Date : 2022-06-06 DOI: 10.1145/3531073.3534494
M. T. Baldassarre, Vita Santa Barletta, G. Dimauro, Domenico Gigante, A. Pagano, A. Piccinno
Privacy by Design and Security by Design are two fundamental principles in the current technological and regulatory context. Software development must therefore integrate them, considering software security on the one hand and user-centricity from the design phase onwards on the other. The team must be supported in integrating privacy and security requirements at all stages of the software lifecycle. Taking these aspects into account, this paper presents the VIS-PRISE prototype, a visual tool for supporting the design team in secure agile development.
Citations: 0