Natural multimodal interaction in immersive flow visualization

Chengyu Su, Chao Yang, Yonghui Chen, Fupan Wang, Fang Wang, Yadong Wu, Xiaorong Zhang

Visual Informatics, Volume 5, Issue 4 (December 2021), Pages 56–66. DOI: 10.1016/j.visinf.2021.12.005
Citations: 5
Abstract
In immersive flow visualization based on virtual reality, meeting the demands of complex, professional flow-analysis tasks through natural human–computer interaction remains a pressing problem. To achieve natural and efficient interaction, we analyze the interaction requirements of flow visualization and study the characteristics of four human–computer interaction channels: hand, head, eye, and voice. We offer several multimodal interaction design suggestions and propose three multimodal interaction methods: head & hand, head & hand & eye, and head & hand & eye & voice. The freedom of hand gestures, the stability of the head, the convenience of eye gaze, and the rapid retrieval afforded by voice commands are combined to improve interaction accuracy and efficiency, while the interaction load is distributed across modalities to reduce fatigue. Our evaluation shows that the proposed multimodal interaction achieves higher accuracy, better time efficiency, and much lower fatigue than traditional joystick interaction.
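To make the "head & hand" combination concrete, the sketch below illustrates one possible way such a channel pairing could be wired up: the head supplies a coarse, stable pointing ray, and a hand pinch gesture commits the selection. This is an illustrative sketch only, not the paper's implementation; the function names, thresholds, and the pinch-confirmation scheme are assumptions introduced here for clarity.

```python
# Illustrative sketch only (not from the paper): a hypothetical "head & hand"
# selection scheme in which head orientation provides a pointing ray and a
# hand pinch gesture confirms the selection of a streamline seed point.
import numpy as np

def point_to_ray_distance(point, ray_origin, ray_dir):
    """Perpendicular distance from a 3D point to a ray (ray_dir must be unit length)."""
    v = point - ray_origin
    t = max(np.dot(v, ray_dir), 0.0)        # clamp so candidates behind the user are ignored
    closest = ray_origin + t * ray_dir
    return np.linalg.norm(point - closest)

def select_seed(head_origin, head_dir, seeds, pinch_detected, max_angle_error=0.05):
    """Return the index of the seed the head ray points at, committing the
    selection only when the hand pinch gesture is detected."""
    head_dir = head_dir / np.linalg.norm(head_dir)
    distances = [point_to_ray_distance(s, head_origin, head_dir) for s in seeds]
    candidate = int(np.argmin(distances))
    # Require both a small pointing error (scaled by distance, i.e. roughly an
    # angular tolerance) and an explicit hand confirmation.
    tolerance = max_angle_error * np.linalg.norm(seeds[candidate] - head_origin)
    if pinch_detected and distances[candidate] < tolerance:
        return candidate
    return None

# Hypothetical usage: three candidate seed points in the flow field.
seeds = np.array([[0.0, 0.0, 2.0], [0.5, 0.1, 2.0], [-0.4, 0.3, 1.5]])
idx = select_seed(np.zeros(3), np.array([0.05, 0.0, 1.0]), seeds, pinch_detected=True)
print("selected seed:", idx)
```

Splitting the task this way reflects the division of labor the abstract describes: the head channel is stable enough for coarse pointing, while the hand gesture provides an unambiguous confirmation, so neither channel alone bears the full interaction load.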