AirWhisper: Enhancing Virtual Reality Experience via Visual-Airflow Multimodal Feedback

IF 2.2 · JCR Q3 (Computer Science, Artificial Intelligence) · Journal on Multimodal User Interfaces · Published: 2024-08-20 · DOI: 10.1007/s12193-024-00438-9
Fangtao Zhao, Ziming Li, Yiming Luo, Yue Li, Hai-Ning Liang
Citations: 0

Abstract


AirWhisper: enhancing virtual reality experience via visual-airflow multimodal feedback

Virtual reality (VR) technology has been increasingly focusing on incorporating multimodal outputs to enhance the sense of immersion and realism. In this work, we developed AirWhisper, a modular wearable device that provides dynamic airflow feedback to enhance VR experiences. AirWhisper simulates wind from multiple directions around the user’s head via four micro fans and 3D-printed attachments. We applied a Just Noticeable Difference study to support the design of the control system and explore the user’s perception of the characteristics of the airflow in different directions. Through multimodal comparison experiments, we find that vision-airflow multimodality output can improve the user’s VR experience from several perspectives. Finally, we designed scenarios with different airflow change patterns and different levels of interaction to test AirWhisper’s performance in various contexts and explore the differences in users’ perception of airflow under different virtual environment conditions. Our work shows the importance of developing human-centered multimodal feedback adaptive learning models that can make real-time dynamic changes based on the user’s perceptual characteristics and environmental features.
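The abstract describes a rig of four micro fans placed around the user's head that together simulate wind from an arbitrary direction. The paper's actual control system is not reproduced here; as a rough sketch of how such a rig *could* map a virtual wind vector to per-fan intensities, one might weight each fan by how well its placement aligns with the wind direction. The fan names, angles, and cosine-weighting scheme below are illustrative assumptions, not the authors' implementation.

```python
import math

def wind_to_fan_speeds(wind_angle_deg: float, strength: float) -> dict:
    """Decompose a horizontal virtual wind into duty cycles for four
    head-mounted fans (front, right, back, left).

    wind_angle_deg: direction the wind comes from; 0 = front, 90 = right.
    strength: normalized wind strength in [0, 1].
    Returns a dict of per-fan duty cycles in [0, 1].
    """
    # Angular placement of each fan around the head (hypothetical layout).
    fan_angles = {"front": 0.0, "right": 90.0, "back": 180.0, "left": 270.0}
    speeds = {}
    for name, fan_angle in fan_angles.items():
        # Cosine similarity between the wind direction and the fan's
        # placement; fans on the leeward side are clamped to zero.
        alignment = math.cos(math.radians(wind_angle_deg - fan_angle))
        speeds[name] = round(max(0.0, alignment) * strength, 3)
    return speeds
```

With this weighting, wind from dead ahead drives only the front fan, while wind from 45° splits evenly between the front and right fans, which is one plausible way to interpolate directions between four fixed fan positions.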

Source journal
Journal on Multimodal User Interfaces
Categories: Computer Science, Artificial Intelligence; Computer Science, Cybernetics
CiteScore: 6.90
Self-citation rate: 3.40%
Articles per year: 12
Review time: >12 weeks
Journal description: The Journal on Multimodal User Interfaces publishes work in the design, implementation and evaluation of multimodal interfaces. Research in the domain of multimodal interaction is by its very essence a multidisciplinary area involving several fields including signal processing, human-machine interaction, computer science, cognitive science and ergonomics. This journal focuses on multimodal interfaces involving advanced modalities, several modalities and their fusion, user-centric design, usability and architectural considerations. Use cases and descriptions of specific application areas are welcome, including for example e-learning, assistance, serious games, affective and social computing, and interaction with avatars and robots.
Latest articles in this journal:
- Human or robot? Exploring different avatar appearances to increase perceived security in shared automated vehicles
- AirWhisper: enhancing virtual reality experience via visual-airflow multimodal feedback
- Truck drivers' views on the road safety benefits of advanced driver assistance systems and Intelligent Transport Systems in Tanzania
- In-vehicle nudging for increased Adaptive Cruise Control use: a field study
- Prediction of pedestrian crossing behaviour at unsignalized intersections using machine learning algorithms: analysis and comparison