Pub Date: 2022-09-23 | DOI: 10.1007/s12193-022-00396-0
Brandon Yam-Viramontes, Héctor Cardona-Reyes, J. Gonzalez-Trejo, Cristian Trujillo-Espinoza, D. Mercado-Ravell
"Commanding a drone through body poses, improving the user experience." Journal on Multimodal User Interfaces, 16(1), 357-369.
Pub Date: 2022-08-01 | DOI: 10.1007/s12193-022-00393-3
Yongmeng Wu, N. Bryan-Kinns, Jinyi Zhi
"Exploring visual stimuli as a support for novices' creative engagement with digital musical interfaces." Journal on Multimodal User Interfaces, 16(1), 343-356.
Pub Date: 2022-07-23 | DOI: 10.1007/s12193-022-00391-5
Riccardo Galdieri, Cristian Camardella, M. Carrozzino, A. Frisoli
"Designing multi-purpose devices to enhance users' perception of haptics." Journal on Multimodal User Interfaces, 16(1), 335-342.
Pub Date: 2022-07-18 | DOI: 10.1007/s12193-022-00392-4
M. Juan, M. Méndez-López, C. Fidalgo, R. Mollá, R. Vivó, David Paramo
"A SLAM-based augmented reality app for the assessment of spatial short-term memory using visual and auditory stimuli." Journal on Multimodal User Interfaces, 16(1), 319-333.
Pub Date: 2022-05-25 | DOI: 10.1007/s12193-022-00390-6
J. D. de Winter, Jimmy Hu, Bastiaan Petermeijer
"Ipsilateral and contralateral warnings: effects on decision-making and eye movements in near-collision scenarios." Journal on Multimodal User Interfaces, 16(1), 303-317.
Pub Date: 2022-05-05 | DOI: 10.1007/s12193-022-00389-z
Sara Falcone, Jan Kolkmeier, Merijn Bruijnes, D. Heylen
"The multimodal EchoBorg: not as smart as it looks." Journal on Multimodal User Interfaces, 16(1), 293-302.
Pub Date: 2022-04-29 | DOI: 10.1007/s12193-022-00388-0
Sara Vlahovic, M. Sužnjević, L. Skorin-Kapov
"A survey of challenges and methods for Quality of Experience assessment of interactive VR applications." Journal on Multimodal User Interfaces, 16(1), 257-291.
Pub Date: 2022-04-12 | DOI: 10.1007/s12193-022-00387-1
Weidong Huang, Mathew Wakefield, Troels Ammitsbøl Rasmussen, Seungwon Kim, Mark Billinghurst
"A review on communication cues for augmented reality based remote guidance." Journal on Multimodal User Interfaces.
Remote guidance on physical tasks is a form of collaboration in which a local worker operating on a set of physical objects is guided by a remote helper. It has many applications in industrial sectors, such as remote maintenance, and how to support this type of remote collaboration has been researched for almost three decades. Although a range of modern computing tools and systems have been proposed, developed, and used to support remote guidance in different application scenarios, it remains essential to provide communication cues in a shared visual space in order to establish the common ground needed for effective communication and collaboration. In this paper, we conduct a selective review that summarizes communication cues, the approaches that implement them, and their effects on augmented reality-based remote guidance. We also discuss open challenges and propose possible directions for future research and development.
Pub Date: 2022-01-09 | DOI: 10.1007/s12193-021-00386-8
Vicent Girbés-Juan, Vinicius Schettino, Luis Gracia, J. Ernesto Solanes, Yiannis Demiris, Josep Tornero
"Combining haptics and inertial motion capture to enhance remote control of a dual-arm robot." Journal on Multimodal User Interfaces.
High dexterity is required in tasks that involve contact between objects, such as surface conditioning (wiping, polishing, scuffing, sanding, etc.), especially when the location of the objects involved is unknown or highly inaccurate because they are moving, as with a car body on an automotive production line. These applications require both human adaptability and robot accuracy. However, sharing the same workspace is not possible in most cases due to safety concerns. Hence, this work introduces a multimodal teleoperation system that combines haptics with an inertial motion capture system. The human operator gets a sense of touch through haptic feedback, while the motion capture device allows more natural movements. Visual feedback assistance is also introduced to enhance immersion. A Baxter dual-arm robot is used to offer greater flexibility and manoeuvrability, allowing two independent operations to be performed simultaneously. Several tests were carried out to assess the proposed system. As the experimental results show, the proposed teleoperation method reduces task duration and improves overall performance.
Pub Date: 2021-11-24 | DOI: 10.1007/s12193-021-00383-x
Walter Setti, Isaac Alonso-Martinez Engel, Luigi F. Cuturi, Monica Gori, Lorenzo Picinali
"The Audio-Corsi: an acoustic virtual reality-based technological solution for evaluating audio-spatial memory abilities." Journal on Multimodal User Interfaces.
Spatial memory is the cognitive skill that allows recall of information about a space, its layout, and the locations of items within it. We present a novel application built around 3D spatial audio technology to evaluate audio-spatial memory abilities. The sound sources were spatially distributed using the 3D Tune-In Toolkit, a virtual acoustic simulator. Participants are presented with sequences of sounds of increasing length, emitted from virtual auditory sources around their heads. To identify stimulus positions and register test responses, we designed a custom-made interface with buttons arranged according to the sound locations. The experimental procedure was inspired by the Corsi-Block test, a validated clinical approach for assessing visuo-spatial memory abilities. In two experimental sessions, participants were tested with the classical Corsi-Block and, blindfolded, with the proposed task, named Audio-Corsi for brevity. Our results show comparable performance across the two tests in terms of the estimated memory parameter, precision. Furthermore, in the Audio-Corsi we observe a lower span than in the Corsi-Block test. We discuss these results in the context of the theoretical relationship between the auditory and visual sensory modalities, and potential applications of this system in multiple scientific and clinical contexts.