
Proceedings of the 2022 International Conference on Advanced Visual Interfaces: Latest Publications

On-site or Remote Working?: An Initial Solution on How COVID-19 Pandemic May Impact Augmented Reality Users
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3534490
Yuchong Zhang, Adam Nowak, A. Romanowski, M. Fjeld
As a cutting-edge technology requiring high-precision equipment, augmented reality (AR) and its users are influenced by the ambient environment. With the tremendous impact of the COVID-19 pandemic, most people have shifted from on-site to remote working. In this study, we propose an initial solution for exploring the impact of the COVID-19 pandemic on AR users working in these two situations. We developed a gamified prototype application in which users are asked to play a headset-based AR game in both on-site and remote working environments. The game, which is highly dependent on the ambient environment, asks people to memorize, distinguish, and place virtual objects while immersed in different surroundings with distinct distractors. We plan to conduct more user studies investigating how COVID-19 affects AR users, which could lead to more in-depth work in the future.
Citations: 6
RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowledge on User-Defined Gestures
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3531112
Bogdan-Florin Gheran, Santiago Villarreal-Narvaez, Radu-Daniel Vatavu, J. Vanderdonckt
The body of knowledge accumulated by gesture elicitation studies (GES), although useful, large, and extensive, is also heterogeneous, scattered across the scientific literature in different venues and fields of research, and difficult to generalize to other contexts of use represented by different gesture types, sensing devices, applications, and user categories. To address such aspects, we introduce RepliGES, a conceptual space that supports (1) replications of gesture elicitation studies to confirm, extend, and complete previous findings, (2) reuse of previously elicited gesture sets to enable new discoveries, and (3) extension and generalization of previous findings with new methods of analysis and for new user populations, towards consolidated knowledge of user-defined gestures. Based on RepliGES, we introduce GEStory, an interactive design space and visual tool for structuring, visualizing, and identifying user-defined gestures from 216 published gesture elicitation studies.
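Consolidating findings in this way presupposes that each study's gestures can be recorded in a uniform, queryable shape. The sketch below illustrates one such record using the dimensions the abstract names (gesture types, sensing devices, user categories); the field names and the query helper are illustrative assumptions, not GEStory's actual data model.

```typescript
// Hypothetical record for one user-defined gesture reported by a GES.
// Field names are illustrative assumptions, not GEStory's real schema.
interface GestureStudyEntry {
  studyId: string;          // identifier of the published study
  referent: string;         // the command the gesture stands for
  gestureType: string;      // e.g. "mid-air", "touch", "on-skin"
  sensingDevice: string;    // e.g. "Kinect", "smartwatch"
  userCategory: string;     // e.g. "children", "people with low vision"
  agreementRate?: number;   // agreement score, when the study reports one
}

// Consolidating scattered findings then becomes a simple query, e.g.
// "which gestures were proposed for 'next slide' across all studies?"
function gesturesFor(entries: GestureStudyEntry[], referent: string): GestureStudyEntry[] {
  return entries.filter((e) => e.referent === referent);
}

const corpus: GestureStudyEntry[] = [
  { studyId: "study-001", referent: "next slide", gestureType: "mid-air",
    sensingDevice: "Kinect", userCategory: "adults", agreementRate: 0.42 },
];
console.log(gesturesFor(corpus, "next slide"));
```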
Citations: 7
CueVR: Studying the Usability of Cue-based Authentication for Virtual Reality
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3531092
Yomna Abdelrahman, Florian Mathis, Pascal Knierim, Axel Kettler, Florian Alt, M. Khamis
Existing virtual reality (VR) authentication schemes are either slow or prone to observation attacks. We propose CueVR, a cue-based authentication scheme that is resilient against observation attacks by design since vital cues are randomly generated and only visible to the user experiencing the VR environment. We investigate three different input modalities through an in-depth usability study (N=20) and show that while authentication using CueVR is slower than the less secure baseline, it is faster than existing observation resilient cue-based schemes and VR schemes (4.151 s – 7.025 s to enter a 4-digit PIN). Our results also indicate that using the controllers’ trackpad significantly outperforms input using mid-air gestures. We conclude by discussing how visual cues can enhance the security of VR authentication while maintaining high usability. Furthermore, we show how existing real-world authentication schemes combined with VR’s unique characteristics can advance future VR authentication procedures.
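The resilience argument rests on cues that are randomly regenerated and visible only inside the headset, so an onlooker who observes the user's selections cannot recover the underlying PIN. The following is a minimal sketch of that idea under assumed details (a 4-digit PIN, a per-round Fisher-Yates shuffle of digit cues); it is not the authors' implementation.

```typescript
// Minimal sketch of cue-based PIN entry: the system shuffles which digit
// sits behind each selectable target, so an observer who sees the chosen
// targets learns nothing about the PIN. Details here (4-digit PIN,
// Fisher-Yates shuffle per round) are assumptions, not CueVR's design.
function shuffled(digits: number[]): number[] {
  const a = [...digits];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

function enterPin(pin: number[], pickTarget: (cues: number[], digit: number) => number): number[] {
  const chosenTargets: number[] = [];
  for (const digit of pin) {
    const cues = shuffled([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]); // visible only inside the HMD
    chosenTargets.push(pickTarget(cues, digit));           // observer sees target index only
  }
  return chosenTargets;
}

// Honest user: select the target currently showing the wanted digit.
const observable = enterPin([4, 9, 3, 1], (cues, digit) => cues.indexOf(digit));
console.log(observable); // target indices; uninformative without the per-round cues
```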
Citations: 9
IXCI: The Immersive eXperimenter Control Interface
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3534489
Alejandro Rey Lopez, Andrea Bellucci, P. Díaz, I. Cuevas
Standalone Head-Mounted Displays open up new possibilities to study full-room, bodily interactions in immersive environments. User studies with these devices, however, are complex to design, set up, and run, since remote access is required to track the user's view and actions, detect errors, and perform adjustments during experimental sessions. We designed IXCI as a research support tool to streamline immersive user studies by explicitly addressing these issues. Through a use case, we show how the tool effectively supports the researcher in tasks such as remote debugging, rapid prototyping, run-time parameter configuration, monitoring participants' progress, providing help to users, recovering from errors, and minimizing data loss.
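A common way to realize such remote control of a standalone headset is to exchange small typed messages between the experimenter's console and the HMD, covering run-time parameter changes, user help, and error recovery. The sketch below illustrates that pattern; the message shapes and names are assumptions for illustration, not IXCI's actual protocol.

```typescript
// Illustrative control messages an experimenter console might send to a
// standalone HMD during a session. The message shapes are assumptions for
// illustration, not IXCI's real protocol.
type ControlMessage =
  | { kind: "setParameter"; name: string; value: number }  // run-time configuration
  | { kind: "showHint"; text: string }                     // help the participant
  | { kind: "restartTrial"; trialId: number };             // recover from an error

function handle(msg: ControlMessage): void {
  switch (msg.kind) {
    case "setParameter":
      console.log(`set ${msg.name} = ${msg.value}`);
      break;
    case "showHint":
      console.log(`hint shown: ${msg.text}`);
      break;
    case "restartTrial":
      console.log(`trial ${msg.trialId} restarted`);
      break;
  }
}

handle({ kind: "setParameter", name: "targetDistance", value: 1.5 });
```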
Citations: 0
Tangible VR Lab: Studying User Interaction in Space and Time Morphing Scenarios
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3531182
Ana Rebelo, Rui Nóbrega, F. Birra
Virtual Reality (VR) offers an interesting interface for exploring changes in space and time that would otherwise be difficult to simulate in the real world. It becomes possible to distort the virtual world by increasing or diminishing distances, as well as to play with time delays. In this way, different spatiotemporal conditions can be created to study different interaction techniques and analyse which ones are more suitable for each task. Related work has revealed that human beings adapt easily to interaction methods dissimilar from everyday conditions. In particular, hyperbolic spaces have shown unique properties for intuitive navigation over an area seemingly larger and less restricted than Euclidean spaces. Research on delay tolerance also suggests that humans are unable to detect slight discrepancies between visual and proprioceptive sensory information during interaction. This work aims to create a tangible Virtual Environment (VE) to explore users' adaptability in spatiotemporal distortion scenarios. As a case study, we restricted the scope of the investigation to two morphing scenarios. The Space Morphing Scenario compares the adaptability of users to hyperbolic versus Euclidean spaces. The Time Morphing Scenario aims to ascertain at which visual delay values task performance is affected. The results showed significant differences between Euclidean space and hyperbolic space. Regarding the visual feedback, although participants found the task more difficult with delay values starting at 500 ms, the results show a decrease in performance as early as the 200 ms delay.
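In the Time Morphing condition, tracked input must be presented with a controlled visual lag (the study probes delays from 200 ms upward). One minimal way to inject such a lag is a timestamped queue between tracking and rendering, as sketched below; this queue-based approach is an assumption for illustration, not the study's actual implementation.

```typescript
// Minimal sketch of injecting a fixed visual delay between tracked input
// and rendered feedback, as in a Time Morphing condition. The queue-based
// approach is an assumption for illustration, not the study's actual code.
interface Sample { t: number; x: number; y: number; z: number }

class DelayBuffer {
  private queue: Sample[] = [];
  constructor(private delayMs: number) {}

  push(sample: Sample): void {
    this.queue.push(sample);
  }

  // Return the newest sample that is at least `delayMs` old at time `now`.
  poll(now: number): Sample | undefined {
    let out: Sample | undefined;
    while (this.queue.length > 0 && now - this.queue[0].t >= this.delayMs) {
      out = this.queue.shift();
    }
    return out; // undefined until enough history has accumulated
  }
}

const buffer = new DelayBuffer(200); // e.g. the 200 ms condition
buffer.push({ t: 0, x: 0, y: 0, z: 0 });
buffer.push({ t: 16, x: 0.01, y: 0, z: 0 });
console.log(buffer.poll(200)); // hand pose rendered 200 ms late
```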
Citations: 1
A Multi-Modal Approach to Creating Routines for Smart Speakers
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3531168
B. R. Barricelli, D. Fogli, Letizia Iemmolo, A. Locoro
Smart speakers can execute user-defined routines, namely, sequences of actions triggered by specific events or conditions. This paper presents a new approach to the creation of routines, which leverages the multi-modal features (vision, speech, and touch) offered by Amazon Alexa running on Echo Show devices. It then illustrates how end users found it easier to create routines with the proposed approach than through the usual interaction with the Alexa app.
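A routine in this sense is simply a trigger paired with an ordered list of actions; the paper's contribution concerns the multi-modal way users author that structure. Below is a minimal sketch of such a routine object with assumed field names; it is not Amazon's actual routine schema.

```typescript
// Illustrative shape of a user-defined routine: one trigger, an ordered
// sequence of actions. Field names are assumptions, not Alexa's schema.
type Trigger =
  | { kind: "utterance"; phrase: string }
  | { kind: "schedule"; time: string };       // e.g. "07:30"

type Action =
  | { kind: "say"; text: string }
  | { kind: "setDevice"; device: string; state: "on" | "off" };

interface Routine {
  name: string;
  trigger: Trigger;
  actions: Action[]; // executed in order when the trigger fires
}

const goodMorning: Routine = {
  name: "good morning",
  trigger: { kind: "utterance", phrase: "good morning" },
  actions: [
    { kind: "say", text: "Good morning!" },
    { kind: "setDevice", device: "kitchen light", state: "on" },
  ],
};
console.log(goodMorning.actions.length, "actions");
```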
Citations: 8
A Phygital Toolkit for Rapidly Designing Smart Things at School
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3531119
R. Gennari, M. Matera, Diego Morra, Mehdi Rizvi
Designing smart things, that is, ideating and programming them, is a complex process and an empowerment opportunity. Toolkits can help non-experts engage in that design, in both its ideation and its programming. This paper presents the IoTgo toolkit, made of card-based material and digital components. Playing with IoTgo can help users understand the design context, ideate smart things for it, make them communicate, and program them with patterns. In recent years, IoTgo was used and evolved remotely with few participants. This paper presents the first in-person study with it, conducted at a high school. The results are discussed and suggest what role IoTgo or similar toolkits can play in the rapid design of smart things with non-seasoned designers or programmers.
Citations: 1
Visuo-Haptic Interaction
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3535260
Katrin Wolf, Marco Kurzweg, Yannick Weiss, S. Brewster, Albrecht Schmidt
While traditional interfaces in human-computer interaction rely mainly on vision and audio, haptics is becoming more and more important. Haptics can not only enrich the user experience and make technology more immersive; it can also transmit information that is hard to convey through vision and audio alone, such as the softness of a surface or other material properties. In this workshop, we aim to discuss how we could interact with technology if haptics were strongly supported, and which novel research areas could emerge.
Citations: 0
Prototyping InContext: Exploring New Paradigms in User Experience Tools
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3531175
A. R. L. Carter, M. Sturdee, Alan J. Dix
The technologies we use in everyday contexts are designed and tested using existing standards of usability. Yet as technology advances, those standards are still based on planar displays and simple screen-based interactions. End-user digital devices need to consider context and physicality as additional influences on design. Additionally, accessibility and multi-modal interaction must be considered as we build technologies with interactions such as soundscapes to support user experience. When considering the tools we use to design existing interactions, we can evaluate new ways of working with software to support the development of the changing face of interactive devices. This paper presents two prototypes that explore the space of user experience design tools: first, in the space of contextual cues for multi-device interaction, and second, in the space of physical prototyping. These prototypes are starting points for a wider discussion around the changing face of usability. We also discuss extending the scope of existing user experience design tools and rethinking what "user experience" means when the devices we own are becoming 'aware' of their surroundings and context and have increasing agency.
Citations: 0
DeBORAh: A Web-Based Cross-Device Orchestration Layer
Pub Date: 2022-06-06 DOI: 10.1145/3531073.3534483
L. Vandenabeele, Hoorieh Afkari, J. Hermen, Louis Deladiennée, Christian Moll, V. Maquil
In this paper, we present DeBORAh, a front-end, web-based software layer supporting orchestration of interactive spaces combining multiple devices and screens. DeBORAh uses a flexible approach to decompose multimedia content into containers and webpages, so that they can be displayed in various ways across devices. We describe the concept and implementation of DeBORAh and explain how it has been used to instantiate two different cross-device scenarios.
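The decomposition described here, multimedia content split into containers backed by webpages and assigned to particular screens, can be pictured as a small mapping structure. The sketch below uses assumed names (Container, Assignment, assign) and is not DeBORAh's actual API.

```typescript
// Minimal sketch of orchestrating content containers across devices:
// each container wraps a webpage URL and is assigned to one or more
// screens. Names are illustrative assumptions, not DeBORAh's API.
interface Container {
  id: string;
  url: string; // the webpage holding this piece of multimedia content
}

type Assignment = Map<string, Container[]>; // deviceId -> containers shown

function assign(assignment: Assignment, deviceId: string, c: Container): void {
  const list = assignment.get(deviceId) ?? [];
  list.push(c);
  assignment.set(deviceId, list);
}

const layout: Assignment = new Map();
assign(layout, "wall-display", { id: "video", url: "https://example.org/video.html" });
assign(layout, "tablet-1", { id: "controls", url: "https://example.org/controls.html" });

for (const [device, containers] of layout) {
  console.log(device, "shows", containers.map((c) => c.id).join(", "));
}
```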
Citations: 1