Multimodal exploration in elementary music classroom
Martha Papadogianni, Ercan Altinsoy, Areti Andreopoulou
Pub Date: 2023-10-18, DOI: 10.1007/s12193-023-00420-x
Hearing loss prevention at loud music events via real-time visuo-haptic feedback
Luca Turchet, Simone Luiten, Tjebbe Treub, Marloes van der Burgt, Costanza Siani, Alberto Boem
Pub Date: 2023-10-13, DOI: 10.1007/s12193-023-00419-4
Abstract: Hearing loss is becoming a global problem, partly as a consequence of exposure to loud music. People may be unaware of the harmful sound levels, and the consequent damage, caused by loud music at venues such as discotheques or festivals. Earplugs are effective in reducing the risk of noise-induced hearing loss but have been shown to be an insufficient prevention strategy on their own. Thus, when it is not possible to lower the volume of the sound source, a viable alternative is to relocate to quieter locations from time to time. In this context, this study introduces a bracelet designed to warn users, via haptic, visual, or visuo-haptic feedback, when the music level at their specific location is too loud. The bracelet embeds a microphone, a microcontroller, an LED strip, and four vibration motors. We performed a user study in which thirteen participants were asked to react to the three kinds of feedback during a simulated disco club event in which the volume of the music varied, reaching loud intensities. Results showed that participants never missed the above-threshold notification with any type of feedback, but visual feedback led to the slowest reaction times and was deemed the least effective. In line with findings in the hearing loss prevention literature, the perceived usefulness of the proposed device depended strongly on participants' subjective attitudes toward hearing risks at loud music events and on their willingness to take preventive action. Ultimately, our study shows how technology, no matter how effective, may not be able to overcome these kinds of cultural issues in hearing loss prevention. Educational strategies may be a more effective way to address the real problem: changing people's attitudes and motivating them to want to protect their hearing.
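The abstract does not disclose the bracelet's firmware, but the warning logic it describes (estimate the sound level at the wearer's location, compare it against a loudness threshold, then drive the LED strip and/or vibration motors depending on the feedback mode) can be sketched as follows. The threshold value, calibration offset, and all names are illustrative assumptions, not taken from the paper:

```python
import math

# Assumed values: the paper does not report the exact warning threshold or
# the microphone's calibration offset.
WARNING_THRESHOLD_DB = 100.0   # hypothetical "too loud" level, in dB SPL
CALIBRATION_OFFSET_DB = 120.0  # hypothetical dBFS -> dB SPL conversion offset

def window_level_db(samples):
    """Estimate the level of one window of normalized mic samples, in dB SPL."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12)) + CALIBRATION_OFFSET_DB

def actuate(level_db, mode="visuo-haptic"):
    """Map a measured level to actuator commands for the chosen feedback mode."""
    above = level_db >= WARNING_THRESHOLD_DB
    return {
        "led_strip": above and mode in ("visual", "visuo-haptic"),
        "vibration_motors": above and mode in ("haptic", "visuo-haptic"),
    }
```

On a real device this loop would run on the microcontroller over fixed-size microphone windows; a Python sketch is used here only to make the decision logic explicit.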
A social robot as your reading companion: exploring the relationships between gaze patterns and knowledge gains
Xuan Liu, Jiachen Ma, Qiang Wang
Pub Date: 2023-10-12, DOI: 10.1007/s12193-023-00418-5
In-vehicle air gesture design: impacts of display modality and control orientation
Jason Sterkenburg, Steven Landry, Shabnam FakhrHosseini, Myounghoon Jeon
Pub Date: 2023-09-14, DOI: 10.1007/s12193-023-00415-8
Pegasos: a framework for the creation of direct mobile coaching feedback systems
Martin Dobiasch, Stefan Oppl, Michael Stöckl, Arnold Baca
Pub Date: 2023-09-12, DOI: 10.1007/s12193-023-00411-y
Abstract: Feedback is essential for athletes to improve their sport performance. Feedback systems aim to provide athletes and coaches not only with visualisations of acquired data but also with insights into possibly invisible aspects of their performance. With the widespread adoption of smartphones and the growth of their capabilities, their use as a platform for feedback systems is becoming increasingly popular. However, developing mobile feedback systems demands a high level of expertise from researchers and practitioners. The Direct Mobile Coaching model is a design paradigm for mobile feedback systems. To reduce programming effort, PEGASOS, a framework for creating feedback systems that implement the Direct Mobile Coaching model, is introduced. The paper compares this framework with the state of the art with regard to its ability to provide different variants of feedback and to offer multimodality to users.
PepperOSC: enabling interactive sonification of a robot's expressive movement
Adrian B. Latupeirissa, Roberto Bresin
Pub Date: 2023-09-09, DOI: 10.1007/s12193-023-00414-9
Abstract: This paper presents the design and development of PepperOSC, an interface that connects Pepper and NAO robots with sound production tools to enable interactive sonification in human-robot interaction (HRI). The interface uses Open Sound Control (OSC) messages to stream kinematic data from the robots to various sound design and music production tools. The goals of PepperOSC are twofold: (i) to provide a tool for HRI researchers developing multimodal user interfaces through sonification, and (ii) to lower the barrier for sound designers to contribute to HRI. To demonstrate its potential use, this paper also presents two applications we have conducted: (i) a course project in which two master's students created a robot sound model in Pure Data, and (ii) a museum installation of a Pepper robot, employing sound models developed by a sound designer and by a composer/researcher in music technology using MaxMSP and SuperCollider, respectively. Furthermore, we discuss potential use cases of PepperOSC in social robotics and artistic contexts. These applications demonstrate the versatility of PepperOSC and its ability to support diverse aesthetic strategies for robot movement sonification, offering a promising approach to enhancing the effectiveness and appeal of human-robot interactions.
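The OSC wire format PepperOSC relies on is a public standard, so the kind of message it streams (one joint value per OSC message, sent as a UDP datagram to a sound engine) can be sketched with the standard library alone. The address `/pepper/joint/HeadYaw` and the target port are illustrative assumptions; the abstract does not specify PepperOSC's actual address namespace:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate an OSC string and pad it to a multiple of 4 bytes."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a one-float OSC message: padded address, typetag ',f', big-endian float32."""
    return osc_pad(address.encode("ascii")) + osc_pad(b",f") + struct.pack(">f", value)

# Hypothetical address for one joint's angle, in radians.
msg = osc_message("/pepper/joint/HeadYaw", 0.25)

# Streaming is one UDP datagram per message, e.g. to SuperCollider's
# default listening port 57120:
#   import socket
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 57120))
```

Because the receiving end is any OSC-aware tool (Pure Data, MaxMSP, SuperCollider), this one encoding step is what lets a single kinematic stream feed very different sound models.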
Perceptually congruent sonification of auditory line charts
J. Fitzpatrick, Flaithrí Neff
Pub Date: 2023-08-30, DOI: 10.1007/s12193-023-00413-w
Correction to: Understanding virtual drilling perception using sound, and kinesthetic cues obtained with a mouse and keyboard
Guoxuan Ning, Brianna Grant, B. Kapralos, A. Quevedo, Kc Collins, K. Kanev, A. Dubrowski
Pub Date: 2023-08-28, DOI: 10.1007/s12193-023-00412-x
Research on the application of gaze visualization interface on virtual reality training systems
Haram Choi, Joungheum Kwon, Sanghun Nam
Pub Date: 2023-08-18, DOI: 10.1007/s12193-023-00409-6
Facial expression recognition via transfer learning in cooperative game paradigms for enhanced social AI
Paula Castro Sánchez, Casey C. Bennett
Pub Date: 2023-08-14, DOI: 10.1007/s12193-023-00410-z