Pub Date: 2023-11-02 · DOI: 10.1007/s12193-023-00425-6
Tim Ziemer, Sara Lenzi, Niklas Rönnberg, Thomas Hermann, Roberto Bresin
Introduction to the special issue on design and perception of interactive sonification
Pub Date: 2023-10-30 · DOI: 10.1007/s12193-023-00424-7
Adrian B. Latupeirissa, Roberto Bresin
Correction to: PepperOSC: enabling interactive sonification of a robot’s expressive movement
Pub Date: 2023-10-27 · DOI: 10.1007/s12193-023-00422-9
Tim Ziemer
Three-dimensional sonification as a surgical guidance tool
Interactive sonification is a well-known guidance method in navigation tasks. Researchers have repeatedly suggested its use in neuronavigation and image-guided surgery, in the hope of reducing clinicians' cognitive load by relieving the visual channel while preserving the precision that image guidance provides. In this paper, we present a surgical use case, simulating craniotomy preparation with a skull phantom. Using auditory, visual, and audiovisual guidance, non-clinicians successfully found targets on a skull that offers hardly any visual or haptic landmarks. The results show that interactive sonification enables novice users to navigate three-dimensional space with high precision. Precision along the depth axis is highest in the audiovisual guidance mode, but adding audio leads to longer task durations and longer motion trajectories.
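The guidance principle summarised in the abstract above — continuously mapping the distance between instrument tip and target to a sound parameter — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual design: the distance-to-pitch mapping, the parameter ranges, and the function name are all assumptions for the sketch.

```python
import math

def sonify_distance(distance_mm, max_distance_mm=200.0,
                    f_min=220.0, f_max=880.0):
    """Map instrument-to-target distance to a tone frequency:
    closer targets sound higher (hypothetical mapping and ranges)."""
    # Clamp the distance and normalise it to [0, 1].
    d = min(max(distance_mm, 0.0), max_distance_mm) / max_distance_mm
    # Exponential interpolation keeps pitch steps roughly even
    # on the perceptual (log-frequency) scale.
    return f_min * (f_max / f_min) ** (1.0 - d)
```

In such a scheme the listener steers by ear: pitch rising toward `f_max` signals approach, and a steady maximum pitch signals arrival at the target.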
Pub Date: 2023-10-26 · DOI: 10.1007/s12193-023-00417-6
Weitao Jiang, Bingxin Zhang, Ruiqi Sun, Dong Zhang, Shan Hu
A study on the attention of people with low vision to accessibility guidance signs
Pub Date: 2023-10-21 · DOI: 10.1007/s12193-023-00416-7
Mariana Seiça, Licínio Roque, Pedro Martins, F. Amílcar Cardoso
An interdisciplinary journey towards an aesthetics of sonification experience
The aesthetic dimension has been proposed as a potential expansion of sonification design, creating listening pieces that achieve the goal of effective data communication. However, current views of aesthetics still aim at optimising mapping criteria to convey the ‘right meaning’, maintaining a mostly functional view of what counts as a successful sonification. This paper proposes an interdisciplinary approach to the aesthetics of sonification experience, grounded in theoretical foundations from the phenomenology of interaction, post-phenomenology, cross-cultural studies, acoustic ecology, and deep listening. From this journey, we present the following design insights: (1) the design of sonifications becomes a design for experience, (2) co-designed during the interaction with each participant; (3) the sonification artefact gains a mediating role that guides the participant’s intentions in the sonification space; (4) the aesthetics of a sonification experience generates a multistable phenomenon, offering new opportunities to experience multiple perspectives on data; (5) the interaction between human participants and the sonic emanations composes a dialogic space. A call to action to reframe the sonification field into novel design spaces is now open, with aesthetics gaining a transformational role in sonification design and interaction.
Pub Date: 2023-10-19 · DOI: 10.1007/s12193-023-00421-w
Suprakas Saren, Abhishek Mukhopadhyay, Debasish Ghose, Pradipta Biswas
Comparing alternative modalities in the context of multimodal human–robot interaction
Pub Date: 2023-10-18 · DOI: 10.1007/s12193-023-00420-x
Martha Papadogianni, Ercan Altinsoy, Areti Andreopoulou
Multimodal exploration in elementary music classroom
Pub Date: 2023-10-13 · DOI: 10.1007/s12193-023-00419-4
Luca Turchet, Simone Luiten, Tjebbe Treub, Marloes van der Burgt, Costanza Siani, Alberto Boem
Hearing loss prevention at loud music events via real-time visuo-haptic feedback
Hearing loss is becoming a global problem, partly as a consequence of exposure to loud music. People may be unaware of the harmful sound levels, and the consequent damage, caused by loud music at venues such as discotheques or festivals. Earplugs are effective in reducing the risk of noise-induced hearing loss but have been shown to be an insufficient prevention strategy. Thus, when it is not possible to lower the volume of the sound source, a viable solution is to relocate to quieter locations from time to time. In this context, this study introduces a bracelet device that warns users, via haptic, visual, or visuo-haptic feedback, when the music sound level is too loud at their specific location. The bracelet embeds a microphone, a microcontroller, an LED strip, and four vibration motors. We performed a user study in which thirteen participants were asked to react to the three kinds of feedback during a simulated disco club event where the volume of the music varied to reach loud intensities. Results showed that participants never missed the above-threshold notification with any type of feedback, but visual feedback led to the slowest reaction times and was deemed the least effective. In line with findings reported in the hearing loss prevention literature, the perceived usefulness of the proposed device depended strongly on participants’ subjective approach to the topic of hearing risks at loud music events, as well as their willingness to take preventive action. Ultimately, our study shows how technology, no matter how effective, may not be able to overcome these kinds of cultural issues concerning hearing loss prevention. Educational strategies may represent a more effective solution to the real problem of changing people’s attitudes and motivating them to protect their hearing.
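The warning logic such a bracelet needs is essentially a threshold test on the measured sound pressure level. The sketch below illustrates the idea only; the 100 dB threshold, the actuator names, and the assumption of calibrated pressure samples in pascal are hypothetical, not details taken from the paper.

```python
import math

WARN_DB_SPL = 100.0   # hypothetical warning threshold, in dB SPL
P_REF = 20e-6         # standard reference sound pressure, in pascal

def spl_db(samples):
    """dB SPL of a window of calibrated pressure samples (pascal)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms / P_REF)

def feedback(samples):
    """Decide which actuators to drive for the current audio window."""
    if spl_db(samples) >= WARN_DB_SPL:
        return {"vibration": True, "led": "red"}    # warn: too loud here
    return {"vibration": False, "led": "green"}     # level acceptable
```

A real device would additionally smooth the level over several seconds (e.g. an A-weighted equivalent level) rather than react to a single window, so that brief peaks do not trigger spurious warnings.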
Pub Date: 2023-10-12 · DOI: 10.1007/s12193-023-00418-5
Xuan Liu, Jiachen Ma, Qiang Wang
A social robot as your reading companion: exploring the relationships between gaze patterns and knowledge gains
Pub Date: 2023-09-14 · DOI: 10.1007/s12193-023-00415-8
Jason Sterkenburg, Steven Landry, Shabnam FakhrHosseini, Myounghoon Jeon
In-vehicle air gesture design: impacts of display modality and control orientation