Combining audio and visual displays to highlight temporal and spatial seismic patterns
Arthur Paté, Gaspard Farge, Benjamin K. Holtzman, Anna C. Barth, Piero Poli, Lapo Boschi, Leif Karlstrom
Pub Date: 2021-07-27. DOI: 10.1007/s12193-021-00378-8
Data visualization, and to a lesser extent data sonification, are classic tools for the scientific community. However, these two approaches are very rarely combined, although they are highly complementary: our visual system is good at recognizing spatial patterns, whereas our auditory system is better tuned to temporal patterns. In this article, data representation methods are proposed that combine visualization, sonification, and spatial audio techniques in order to optimize the user’s perception of spatial and temporal patterns in a single display, to increase the feeling of immersion, and to take advantage of multimodal integration mechanisms. Three seismic data sets are used to illustrate the methods, covering different physical phenomena, time scales, spatial distributions, and spatio-temporal dynamics. The methods are adapted to the specificities of each data set and to the amount of information the designer wants to display. This leads to further developments, namely the use of audification with two time scales, the switch from pure audification to time-modulated noise, and the switch from pure audification to sonic icons. First user feedback from live demonstrations indicates that the methods presented in this article enhance the perception of spatio-temporal patterns, a key factor in understanding seismically active systems and a step towards apprehending the processes that drive this activity.

SoundSight: a mobile sensory substitution device that sonifies colour, distance, and temperature
Giles Hamilton-Fletcher, James Alvarez, Marianna Obrist, Jamie Ward
Pub Date: 2021-07-02. DOI: 10.1007/s12193-021-00376-w

A wearable virtual touch system for IVIS in cars
Gowdham Prabhakar, Priyam Rajkhowa, Dharmesh Harsha, Pradipta Biswas
Pub Date: 2021-06-22. DOI: 10.1007/s12193-021-00377-9
In the automotive domain, the operation of secondary tasks such as accessing the infotainment system or adjusting the air conditioning vents and side mirrors distracts drivers from driving. Although existing modalities like gesture and speech recognition systems help drivers undertake secondary tasks by reducing the duration of eyes off the road, they often require remembering a set of gestures or screen sequences. In this paper, we propose two modalities that let drivers virtually touch the dashboard display using a laser tracker, paired with either a mechanical switch or an eye gaze switch. We compared the performance of our proposed modalities against the conventional touch modality in an automotive environment by comparing pointing and selection times for a representative secondary task, and we also analysed the effect on driving performance in terms of deviation from lane, average speed, variation in perceived workload, and system usability. We found no significant difference in driving and pointing performance between the laser tracking system and the existing touchscreen system. Our results also showed that the driving and pointing performance of the virtual touch system with the eye gaze switch was significantly better than with the mechanical switch. We evaluated the efficacy of the proposed virtual touch system with the eye gaze switch inside a real car and investigated the system's acceptance by professional drivers using qualitative research. The quantitative and qualitative studies indicate the importance of multimodal systems inside cars and highlight several criteria for the acceptance of new automotive user interfaces.

Interactive exploration of a hierarchical spider web structure with sound
Isabelle Su, Ian Hattwick, Christine Southworth, Evan Ziporyn, Ally Bisshop, R. Mühlethaler, Tomás Saraceno, M. Buehler
Pub Date: 2021-06-21. DOI: 10.1007/s12193-021-00375-x

{"title":"Correction to: A gaze-based interactive system to explore artwork imagery","authors":"Piercarlo Dondi, Marco Porta, Angelo Donvito, Giovanni Volpe","doi":"10.1007/s12193-021-00374-y","DOIUrl":"https://doi.org/10.1007/s12193-021-00374-y","url":null,"abstract":"<p>A Correction to this paper has been published: https://doi.org/10.1007/s12193-021-00373-z</p>","PeriodicalId":17529,"journal":{"name":"Journal on Multimodal User Interfaces","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138508795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A gaze-based interactive system to explore artwork imagery
Piercarlo Dondi, Marco Porta, Angelo Donvito, Giovanni Volpe
Pub Date: 2021-05-21. DOI: 10.1007/s12193-021-00373-z
Interactive and immersive technologies can significantly enhance the experience of museums and exhibits. Several studies have shown that multimedia installations can attract visitors, presenting cultural and scientific information in an appealing way. In this article, we present our workflow for achieving gaze-based interaction with artwork imagery. We designed both a tool for creating interactive “gaze-aware” images and an eye tracking application for interacting with those images through gaze. Users can display different pictures, perform pan and zoom operations, and search for regions of interest with associated multimedia content (text, image, audio, or video). Besides being an assistive technology for motor-impaired people (like most gaze-based interaction applications), our solution can also be a valid alternative to the touch screen panels common in museums, in accordance with the new safety guidelines imposed by the COVID-19 pandemic. Experiments carried out with a panel of volunteer testers have shown that the tool is usable, effective, and easy to learn.

Grounding behaviours with conversational interfaces: effects of embodiment and failures
Dimosthenis Kontogiorgos, Andre Pereira, Joakim Gustafson
Pub Date: 2021-03-24. DOI: 10.1007/s12193-021-00366-y
Conversational interfaces that interact with humans need to continuously establish, maintain, and repair common ground in task-oriented dialogues. Uncertainty, repairs, and acknowledgements are expressed in user behaviour as conversational partners continuously work to maintain mutual understanding. Users change their behaviour when interacting with systems in different forms of embodiment, which affects the ability of these interfaces to observe users’ recurrent social signals. Additionally, humans are intellectually biased towards social activity when facing anthropomorphic agents or when presented with subtle social cues. This paper presents two studies examining how humans interact in a referential communication task with wizarded interfaces in different forms of embodiment. In study 1 (N = 30), we test whether humans respond in the same way to agents in different forms of embodiment and social behaviour. In study 2 (N = 44), we replicate the same task and agents but introduce conversational failures that disrupt the process of grounding. The findings indicate that it is not always favourable for agents to be anthropomorphised or to communicate with non-verbal cues, as human grounding behaviours change when embodiment and failures are manipulated.

RFID-based tangible and touch tabletop for dual reality in crisis management context
Walid Merrad, A. Héloir, C. Kolski, Antonio Krüger
Pub Date: 2021-03-19. DOI: 10.1007/s12193-021-00370-2

Behavior and usability analysis for multimodal user interfaces
Hamdi Dibeklioğlu, Elif Surer, A. A. Salah, T. Dutoit
Pub Date: 2021-03-16. DOI: 10.1007/s12193-021-00372-0

Identifying and evaluating conceptual representations for auditory-enhanced interactive physics simulations
Brianna J. Tomlinson, B. Walker, Emily B. Moore
Pub Date: 2021-03-15. DOI: 10.1007/s12193-021-00365-z