Sadia Rubab, Muhammad Wajeeh Uz Zaman, Umer Rashid, Lingyun Yu, Yingcai Wu
Title: Audio-visual training and feedback to learn touch-based gestures
DOI: 10.1007/s12650-024-01012-x (https://doi.org/10.1007/s12650-024-01012-x)
Journal: Journal of Visualization (Impact Factor 1.7; JCR Q3, Computer Science, Interdisciplinary Applications)
Publication date: 2024-06-17
Publication type: Journal Article
Open access: No
Citations: 0
Abstract
To help people learn the touch-based gestures needed to perform various tasks, researchers commonly rely on training from an experimenter. However, this creates dependence on a person and leads to memory problems as the number and complexity of gestures grow. Several on-demand training and feedback methods have been proposed that provide constant support and help people learn novel gestures without human assistance. Non-speech audio combined with a visual cue, one such gesture training/feedback method, could be extended to interactive visualization tools. However, the literature offers several options for the non-speech audio and visual cues but no comparisons between them. We conducted an online study to identify suitable non-speech audio representations paired with the visual cues of 12 touch-based gestures. For each audio-visual combination, we evaluated the thinking, time demand, frustration, understanding, and learnability reported by 45 participants. We found that the visual cue of a gesture, whether iconic or ghost, did not affect the suitability of an audio representation. However, preferences in audio channels and audio patterns differed across gestures and their directions. We implemented the training/feedback method in an InfoVis tool, and the evaluation showed that participants made significant use of the method to explore the tool.
Journal of Visualization (Computer Science, Interdisciplinary Applications; Imaging Science & Photographic Technology)
CiteScore: 3.40
Self-citation rate: 5.90%
Articles per year: 79
Review time: >12 weeks
About the journal:
Visualization is an interdisciplinary imaging science devoted to making the invisible visible through the techniques of experimental visualization and computer-aided visualization.
The journal's scope is to provide a forum for exchanging information on the latest visualization technology and its applications through the presentation of recent papers by both researchers and practitioners.