iPlan: A Platform for Constructing Localized, Reduced-Form Models of Land-Use Impacts
A. Ruis, Carol Barford, Jais Brohinsky, Yuanru Tan, Matthew Bougie, Zhiqiang Cai, Tyler J. Lark, David Williamson Shaffer
To understand socio-environmental systems and develop the confidence that meaningful action can be taken to address socio-environmental problems, young people need interactive simulations that enable them to take consequential actions in a familiar context and see the results. This can be achieved through reduced-form models with appropriate user interfaces, but it is a significant challenge to construct a system capable of producing educational models of socio-environmental systems that are localizable and customizable yet accessible to educators and learners. In this paper, we present iPlan, a free, online educational software application designed to enable educators and middle- and high-school-aged learners to create custom, localized land-use simulations that can be used to frame, explore, and address complex land-use problems. We describe the software application and its underlying computational models in detail, and we present robust evidence that the accuracy of iPlan simulations is appropriate for educational contexts, along with preliminary evidence that educators can produce simulations suited to their pedagogical goals and learner populations.
{"title":"iPlan: A Platform for Constructing Localized, Reduced-Form Models of Land-Use Impacts","authors":"A. Ruis, Carol Barford, Jais Brohinsky, Yuanru Tan, Matthew Bougie, Zhiqiang Cai, Tyler J. Lark, David Williamson Shaffer","doi":"10.3390/mti8040030","DOIUrl":"https://doi.org/10.3390/mti8040030","url":null,"abstract":"To help young people understand socio-environmental systems and develop the confidence that meaningful action can be taken to address socio-environmental problems, young people need interactive simulations that enable them to take consequential actions in a familiar context and see the results. This can be achieved through reduced-form models with appropriate user interfaces, but it is a significant challenge to construct a system capable of producing educational models of socio-environmental systems that are localizable and customizable but accessible to educators and learners. In this paper, we present iPlan, a free, online educational software application designed to enable educators and middle- and high-school-aged learners to create custom, localized land-use simulations that can be used to frame, explore, and address complex land-use problems. We describe in detail the software application and its underlying computational models, and we present robust evidence that the accuracy of iPlan simulations is appropriate for educational contexts and preliminary evidence that educators are able to produce simulations suitable for their pedagogical goals and learner populations.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140718785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging Visualization and Machine Learning Techniques in Education: A Case Study of K-12 State Assessment Data
Loni Taylor, Vibhuti Gupta, Kwanghee Jung
As data-driven models gain importance in driving decisions and processes, it has become increasingly important to visualize data with both speed and accuracy. A massive volume of data is now generated in the educational sphere by various learning platforms, tools, and institutions. Visual analytics of educational big data has the capability to improve student learning, develop strategies for personalized learning, and improve faculty productivity. However, the education domain has seen limited progress in data-driven decision making that leverages recent advances in machine learning. Some recent tools, such as Tableau, Power BI, the Microsoft Azure suite, and Sisense, apply artificial intelligence and machine learning techniques to visualize data and generate insights, but their applicability to educational advances is limited. This paper focuses on leveraging machine learning and visualization techniques, demonstrating their utility through a practical implementation using K-12 state assessment data compiled from the institutional websites of the states of Texas and Louisiana. Effective modeling and predictive analytics are the focus of the sample use case presented in this research. Our approach demonstrates the applicability of web technology in conjunction with machine learning to provide a cost-effective and timely solution for visualizing and analyzing big educational data. Additionally, ad hoc visualization provides contextual analysis in areas of concern for education agencies (EAs).
{"title":"Leveraging Visualization and Machine Learning Techniques in Education: A Case Study of K-12 State Assessment Data","authors":"Loni Taylor, Vibhuti Gupta, Kwanghee Jung","doi":"10.3390/mti8040028","DOIUrl":"https://doi.org/10.3390/mti8040028","url":null,"abstract":"As data-driven models gain importance in driving decisions and processes, recently, it has become increasingly important to visualize the data with both speed and accuracy. A massive volume of data is presently generated in the educational sphere from various learning platforms, tools, and institutions. The visual analytics of educational big data has the capability to improve student learning, develop strategies for personalized learning, and improve faculty productivity. However, there are limited advancements in the education domain for data-driven decision making leveraging the recent advancements in the field of machine learning. Some of the recent tools such as Tableau, Power BI, Microsoft Azure suite, Sisense, etc., leverage artificial intelligence and machine learning techniques to visualize data and generate insights from them; however, their applicability in educational advances is limited. This paper focuses on leveraging machine learning and visualization techniques to demonstrate their utility through a practical implementation using K-12 state assessment data compiled from the institutional websites of the States of Texas and Louisiana. Effective modeling and predictive analytics are the focus of the sample use case presented in this research. Our approach demonstrates the applicability of web technology in conjunction with machine learning to provide a cost-effective and timely solution to visualize and analyze big educational data. Additionally, ad hoc visualization provides contextual analysis in areas of concern for education agencies (EAs).","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140731848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Use of Immersive Technologies in Karate Training: A Scoping Review
Dimosthenis Lygouras, A. Tsinakos
This study investigates the integration of immersive technologies, primarily virtual reality (VR), in karate training and practice. The scoping review adheres to PRISMA guidelines and encompasses an extensive search across the IEEE Xplore, Web of Science, and Scopus databases, yielding 165 articles, of which 7 were ultimately included based on strict inclusion and exclusion criteria. The selected studies consistently highlight the dominance of VR technology in karate practice and teaching, with VR often delivered via head-mounted displays (HMDs). The main purposes of VR are to create life-like training environments, evaluate performance, and enhance skill development. Immersive technologies, particularly VR, offer accurate motion capture and recording capabilities that deliver detailed feedback on technique, reaction time, and decision-making. This precision empowers athletes and coaches to identify areas for improvement and make data-driven training adjustments. Despite the promise of immersive technologies, no established frameworks or guidelines exist for their effective application in karate training, suggesting a need for best practices and guidelines to ensure optimal integration.
{"title":"The Use of Immersive Technologies in Karate Training: A Scoping Review","authors":"Dimosthenis Lygouras, A. Tsinakos","doi":"10.3390/mti8040027","DOIUrl":"https://doi.org/10.3390/mti8040027","url":null,"abstract":"This study investigates the integration of immersive technologies, primarily virtual reality (VR), in the domain of karate training and practice. The scoping review adheres to PRISMA guidelines and encompasses an extensive search across IEEE Xplore, Web of Science, and Scopus databases, yielding a total of 165 articles, from which 7 were ultimately included based on strict inclusion and exclusion criteria. The selected studies consistently highlight the dominance of VR technology in karate practice and teaching, with VR often facilitated by head-mounted displays (HMDs). The main purpose of VR is to create life-like training environments, evaluate performance, and enhance skill development. Immersive technologies, particularly VR, offer accurate motion capture and recording capabilities that deliver detailed feedback on technique, reaction time, and decision-making. This precision empowers athletes and coaches to identify areas for improvement and make data-driven training adjustments. Despite the promise of immersive technologies, established frameworks or guidelines are absent for their effective application in karate training. As a result, this suggests a need for best practices and guidelines to ensure optimal integration.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140764111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile User Interface Adaptation Based on Usability Reward Model and Multi-Agent Reinforcement Learning
Dmitry Vidmanov, Alexander Alfimtsev
Today, reinforcement learning is one of the most effective machine learning approaches for automatically adapting computer systems to user needs. However, incorporating this technology into a digital product requires addressing a key challenge: defining the reward model in the digital environment. This paper proposes a usability reward model for multi-agent reinforcement learning. Well-known mathematical formulas for measuring usability metrics were analyzed in detail and incorporated into the usability reward model. In the usability reward model, any neural-network-based multi-agent reinforcement learning algorithm can be used as the underlying learning algorithm. This paper presents a study using independent and actor-critic reinforcement learning algorithms to investigate their impact on the usability metrics of a mobile user interface. Computational experiments and usability tests were conducted in a specially designed multi-agent environment for mobile user interfaces, enabling the implementation of various usage scenarios and real-time adaptations.
{"title":"Mobile User Interface Adaptation Based on Usability Reward Model and Multi-Agent Reinforcement Learning","authors":"Dmitry Vidmanov, Alexander Alfimtsev","doi":"10.3390/mti8040026","DOIUrl":"https://doi.org/10.3390/mti8040026","url":null,"abstract":"Today, reinforcement learning is one of the most effective machine learning approaches in the tasks of automatically adapting computer systems to user needs. However, implementing this technology into a digital product requires addressing a key challenge: determining the reward model in the digital environment. This paper proposes a usability reward model in multi-agent reinforcement learning. Well-known mathematical formulas used for measuring usability metrics were analyzed in detail and incorporated into the usability reward model. In the usability reward model, any neural network-based multi-agent reinforcement learning algorithm can be used as the underlying learning algorithm. This paper presents a study using independent and actor-critic reinforcement learning algorithms to investigate their impact on the usability metrics of a mobile user interface. Computational experiments and usability tests were conducted in a specially designed multi-agent environment for mobile user interfaces, enabling the implementation of various usage scenarios and real-time adaptations.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140385871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The FlexiBoard: Tangible and Tactile Graphics for People with Vision Impairments
Mathieu Raynal, Julie Ducasse, M. Macé, Bernard Oriola, Christophe Jouffrais
Over the last decade, several projects have demonstrated how interactive tactile graphics and tangible interfaces can improve and enrich access to information for people with vision impairments. While the former can be used to display a relatively large amount of information, they cannot be physically updated, which constrains the type of tasks that they can support. On the other hand, tangible interfaces are particularly suited for the (re)construction and manipulation of graphics, but the use of physical objects also restricts the type and amount of information that they can convey. We propose to bridge the gap between these two approaches by investigating the potential of tactile and tangible graphics for people with vision impairments. Working closely with special education teachers, we designed and developed the FlexiBoard, an affordable and portable system that enhances traditional tactile graphics with tangible interaction. In this paper, we report on the successive design steps that enabled us to identify and consider technical and design requirements. We then explore two domains of application for the FlexiBoard: education and board games. Firstly, we report on a brainstorming session we organized with four teachers to explore the application space of tangible and tactile graphics for educational activities. Secondly, we describe how the FlexiBoard enabled the successful adaptation of a visual board game into a multimodal, accessible game that supports collaboration between sighted, low-vision, and blind players.
{"title":"The FlexiBoard: Tangible and Tactile Graphics for People with Vision Impairments","authors":"Mathieu Raynal, Julie Ducasse, M. Macé, Bernard Oriola, Christophe Jouffrais","doi":"10.3390/mti8030017","DOIUrl":"https://doi.org/10.3390/mti8030017","url":null,"abstract":"Over the last decade, several projects have demonstrated how interactive tactile graphics and tangible interfaces can improve and enrich access to information for people with vision impairments. While the former can be used to display a relatively large amount of information, they cannot be physically updated, which constrains the type of tasks that they can support. On the other hand, tangible interfaces are particularly suited for the (re)construction and manipulation of graphics, but the use of physical objects also restricts the type and amount of information that they can convey. We propose to bridge the gap between these two approaches by investigating the potential of tactile and tangible graphics for people with vision impairments. Working closely with special education teachers, we designed and developed the FlexiBoard, an affordable and portable system that enhances traditional tactile graphics with tangible interaction. In this paper, we report on the successive design steps that enabled us to identify and consider technical and design requirements. We thereafter explore two domains of application for the FlexiBoard: education and board games. Firstly, we report on one brainstorming session that we organized with four teachers in order to explore the application space of tangible and tactile graphics for educational activities. Secondly, we describe how the FlexiBoard enabled the successful adaptation of one visual board game into a multimodal accessible game that supports collaboration between sighted, low-vision and blind players.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140426436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How to Design Human-Vehicle Cooperation for Automated Driving: A Review of Use Cases, Concepts, and Interfaces
J. Peintner, Bengt Escher, Henrik Detjen, Carina Manger, A. Riener
Currently, a significant gap exists between academic and industrial research in automated driving development. Despite this, there is broad agreement that cooperative control approaches in automated vehicles will surpass the previously favored takeover paradigm in most driving situations due to enhanced driving performance and user experience. Yet, the application of these concepts in real driving situations remains unclear, and a holistic approach to driving cooperation is missing. Existing research has primarily focused on testing specific interaction scenarios and implementations. To address this gap and offer a contemporary perspective on designing human–vehicle cooperation in automated driving, we have developed a three-part taxonomy based on an extensive literature review. The taxonomy broadens the notion of driving cooperation towards a holistic and application-oriented view by encompassing (1) the “Cooperation Use Case”, (2) the “Cooperation Frame”, and (3) the “Human–Machine Interface”. We validate the taxonomy by categorizing the related literature and providing a detailed analysis of an exemplar paper. The proposed taxonomy offers designers and researchers a concise overview of the current state of driving cooperation and insights for future work. Further, the taxonomy can guide automotive HMI designers in the ideation, communication, comparison, and reflection of cooperative driving interfaces.
{"title":"How to Design Human-Vehicle Cooperation for Automated Driving: A Review of Use Cases, Concepts, and Interfaces","authors":"J. Peintner, Bengt Escher, Henrik Detjen, Carina Manger, A. Riener","doi":"10.3390/mti8030016","DOIUrl":"https://doi.org/10.3390/mti8030016","url":null,"abstract":"Currently, a significant gap exists between academic and industrial research in automated driving development. Despite this, there is common sense that cooperative control approaches in automated vehicles will surpass the previously favored takeover paradigm in most driving situations due to enhanced driving performance and user experience. Yet, the application of these concepts in real driving situations remains unclear, and a holistic approach to driving cooperation is missing. Existing research has primarily focused on testing specific interaction scenarios and implementations. To address this gap and offer a contemporary perspective on designing human–vehicle cooperation in automated driving, we have developed a three-part taxonomy with the help of an extensive literature review. The taxonomy broadens the notion of driving cooperation towards a holistic and application-oriented view by encompassing (1) the “Cooperation Use Case”, (2) the “Cooperation Frame”, and (3) the “Human–Machine Interface”. We validate the taxonomy by categorizing related literature and providing a detailed analysis of an exemplar paper. The proposed taxonomy offers designers and researchers a concise overview of the current state of driver cooperation and insights for future work. Further, the taxonomy can guide automotive HMI designers in ideation, communication, comparison, and reflection of cooperative driving interfaces.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140429154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Substitute Buttons: Exploring Tactile Perception of Physical Buttons for Use as Haptic Proxies
Bram van Deurzen, Gustavo Alberto Rovelo Ruiz, Daniël M. Bot, D. Vanacken, Kris Luyten
Buttons are everywhere and are one of the most common interaction elements in both physical and digital interfaces. While virtual buttons offer versatility, enhancing them with realistic haptic feedback is challenging. Achieving this requires a comprehensive understanding of the tactile perception of physical buttons and its transferability to virtual counterparts. This research investigates tactile perception with respect to button attributes such as shape, size, and roundness, and its potential generalization across diverse button types. In our study, participants interacted with each of the 36 buttons in our search space and indicated which one they thought they were touching. The findings were used to establish six substitute buttons capable of effectively emulating the tactile experience of a wide range of buttons. In a second study, these substitute buttons were validated against virtual buttons in VR, highlighting their potential use as haptic proxies for applications such as encountered-type haptics.
{"title":"Substitute Buttons: Exploring Tactile Perception of Physical Buttons for Use as Haptic Proxies","authors":"Bram van Deurzen, Gustavo Alberto Rovelo Ruiz, Daniël M. Bot, D. Vanacken, Kris Luyten","doi":"10.3390/mti8030015","DOIUrl":"https://doi.org/10.3390/mti8030015","url":null,"abstract":"Buttons are everywhere and are one of the most common interaction elements in both physical and digital interfaces. While virtual buttons offer versatility, enhancing them with realistic haptic feedback is challenging. Achieving this requires a comprehensive understanding of the tactile perception of physical buttons and their transferability to virtual counterparts. This research investigates tactile perception concerning button attributes such as shape, size, and roundness and their potential generalization across diverse button types. In our study, participants interacted with each of the 36 buttons in our search space and provided a response to which one they thought they were touching. The findings were used to establish six substitute buttons capable of effectively emulating tactile experiences across various buttons. In a second study, these substitute buttons were validated against virtual buttons in VR. Highlighting the potential use of the substitute buttons as haptic proxies for applications such as encountered-type haptics.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140448199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technology and Meditation: Exploring the Challenges and Benefits of a Physical Device to Support Meditation Routine
Tjaša Kermavnar, Pieter M. A. Desmet
Existing studies of technology supporting meditation habit formation mainly focus on mobile applications that support users via reminders. A potentially more effective source of motivation could be contextual cues provided by meaningful objects in meaningful locations. This longitudinal, mixed-methods, 8-week study explored the effectiveness of such an object, Prana, in supporting meditation habit formation among seven novice meditators. First, the Meditation Intentions Questionnaire-24 and the Determinants of Meditation Practice Inventory-Revised were administered. The Self-Report Habit Index (SrHI) was administered before and after the study. Prana recorded meditation session times, while daily diaries captured subjective experiences. At the end of the study, the System Usability Scale, the Ten-Item Personality Inventory, and the Brief Self-Control Scale were completed, followed by individual semi-structured interviews. We expected to find an increase in meditation frequency and temporal consistency, but the results did not confirm this. Participants meditated for between 16% and 84% of the study period. Meditation frequency decreased over time for four participants, decreased and then increased for two, and remained stable for one. Daily meditation experiences were positive, and the perceived difficulty of starting to meditate was low. No relevant correlation was found between the perceived difficulty of starting to meditate and the overall meditation experience; the latter was only weakly associated with the likelihood of meditating the next day. While meditation became more habitual for six participants, positive SrHI scores were rare. Despite the inconclusive results, this study provides valuable insights into the challenges and benefits of using a meditation device, as well as potential methodological difficulties in studying habit formation with physical devices.
{"title":"Technology and Meditation: Exploring the Challenges and Benefits of a Physical Device to Support Meditation Routine","authors":"Tjaša Kermavnar, Pieter M. A. Desmet","doi":"10.3390/mti8020009","DOIUrl":"https://doi.org/10.3390/mti8020009","url":null,"abstract":"Existing studies of technology supporting meditation habit formation mainly focus on mobile applications which support users via reminders. A potentially more effective source of motivation could be contextual cues provided by meaningful objects in meaningful locations. This longitudinal mixed-methods 8-week study explored the effectiveness of such an object, Prana, in supporting forming meditation habits among seven novice meditators. First, the Meditation Intentions Questionnaire-24 and the Determinants of Meditation Practice Inventory-Revised were administered. The self-report habit index (SrHI) was administered before and after the study. Prana recorded meditation session times, while daily diaries captured subjective experiences. At the end of the study, the system usability scale, the ten-item personality inventory, and the brief self-control scale were completed, followed by individual semi-structured interviews. We expected to find an increase in meditation frequency and temporal consistency, but the results failed to confirm this. Participants meditated for between 16% and 84% of the study. The frequency decreased with time for four, decreased with subsequent increase for two, and remained stable for one of them. Daily meditation experiences were positive, and the perceived difficulty to start meditating was low. No relevant correlation was found between the perceived difficulty in starting to meditate and meditation experience overall; the latter was only weakly associated with the likelihood of meditating the next day. While meditation became more habitual for six participants, positive scores on SrHI were rare. Despite the inconclusive results, this study provides valuable insights into challenges and benefits of using a meditation device, as well as potential methodological difficulties in studying habit formation with physical devices.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140486153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optical Rules to Mitigate the Parallax-Related Registration Error in See-Through Head-Mounted Displays for the Guidance of Manual Tasks
Vincenzo Ferrari, N. Cattari, S. Condino, F. Cutolo
Head-mounted displays (HMDs) are hands-free devices particularly useful for guiding near-field tasks such as manual surgical procedures. See-through HMDs do not significantly alter the user’s direct view of the world, but the optical merging of real and virtual information can hinder their coherent and simultaneous perception. In particular, the coherence between the real and virtual content is affected by a viewpoint parallax-related misalignment, which is due to the inaccessibility of the user-perceived reality through the semi-transparent optical combiner of the optical see-through (OST) display. Recent works demonstrated that a proper selection of the collimation optics of the HMD significantly mitigates the parallax-related registration error without the need for eye-tracking cameras or error-prone alignment-based display calibration procedures. These solutions are either based on HMDs that project the virtual imaging plane directly at arm’s distance, or they require the integration of additional lenses on the HMD to optically move the image of the observed scene to the virtual projection plane of the HMD. This paper describes and evaluates the pros and cons of both solutions, providing an analytical estimation of the residual registration error achieved with each and discussing the perceptual issues generated by the simultaneous focalization of real and virtual information.
{"title":"Optical Rules to Mitigate the Parallax-Related Registration Error in See-Through Head-Mounted Displays for the Guidance of Manual Tasks","authors":"Vincenzo Ferrari, N. Cattari, S. Condino, F. Cutolo","doi":"10.3390/mti8010004","DOIUrl":"https://doi.org/10.3390/mti8010004","url":null,"abstract":"Head-mounted displays (HMDs) are hands-free devices particularly useful for guiding near-field tasks such as manual surgical procedures. See-through HMDs do not significantly alter the user’s direct view of the world, but the optical merging of real and virtual information can hinder their coherent and simultaneous perception. In particular, the coherence between the real and virtual content is affected by a viewpoint parallax-related misalignment, which is due to the inaccessibility of the user-perceived reality through the semi-transparent optical combiner of the OST Optical See-Through (OST) display. Recent works demonstrated that a proper selection of the collimation optics of the HMD significantly mitigates the parallax-related registration error without the need for any eye-tracking cameras and/or for any error-prone alignment-based display calibration procedures. These solutions are either based on HMDs that projects the virtual imaging plane directly at arm’s distance, or they require the integration on the HMD of additional lenses to optically move the image of the observed scene to the virtual projection plane of the HMD. This paper describes and evaluates the pros and cons of both the suggested solutions by providing an analytical estimation of the residual registration error achieved with both solutions and discussing the perceptual issues generated by the simultaneous focalization of real and virtual information.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139385512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Reality Assessment of Attention Deficits in Traumatic Brain Injury: Effectiveness and Ecological Validity
A. Malegiannaki, Evangelia Garefalaki, Nikolaos Pellas, M. Kosmidis
Early detection is crucial for addressing attention deficits commonly associated with traumatic brain injury (TBI), informing effective rehabilitation planning and intervention. While traditional neuropsychological assessments have conventionally been used to evaluate attention deficits, their limited ecological validity presents notable challenges. This study explores the efficacy and validity of a novel virtual reality test, the Computerized Battery for the Assessment of Attention Disorders (CBAAD), in a cohort of TBI survivors (n = 20) compared with a healthy control group (n = 20). Participants, ranging in age from 21 to 62 years, completed a comprehensive neuropsychological assessment, including the CBAAD and the Attention-Related Cognitive Errors Scale. While variations in attentional performance were observed across age cohorts, the study found no statistically significant age-related effects within either group. The CBAAD demonstrated sensitivity to attentional dysfunction in the TBI group, and regression analyses demonstrated its effectiveness in predicting the real-life attentional errors reported by TBI patients. In summary, the CBAAD is sensitive to attentional dysfunction after TBI and predicts real-world attentional errors, establishing its value as a comprehensive test battery for assessing attention in this population. Its implementation holds promise for enhancing the early identification of attentional impairments and facilitating tailored rehabilitation strategies for TBI patients.
{"title":"Virtual Reality Assessment of Attention Deficits in Traumatic Brain Injury: Effectiveness and Ecological Validity","authors":"A. Malegiannaki, Evangelia Garefalaki, Nikolaos Pellas, M. Kosmidis","doi":"10.3390/mti8010003","DOIUrl":"https://doi.org/10.3390/mti8010003","url":null,"abstract":"Early detection is crucial for addressing attention deficits commonly associated with Traumatic brain injury (TBI), informing effective rehabilitation planning and intervention. While traditional neuropsychological assessments have been conventionally used to evaluate attention deficits, their limited ecological validity presents notable challenges. This study explores the efficacy and validity of a novel virtual reality test, the Computerized Battery for the Assessment of Attention Disorders (CBAAD), among a cohort of TBI survivors (n = 20), in comparison to a healthy control group (n = 20). Participants, ranging in age from 21 to 62 years, were administered a comprehensive neuropsychological assessment, including the CBAAD and the Attention Related Cognitive Errors Scale. While variations in attentional performance were observed across age cohorts, the study found no statistically significant age-related effects within either group. The CBAAD demonstrated sensitivity to attentional dysfunction in the TBI group, establishing its value as a comprehensive test battery for assessing attention in this specific population. Regression analyses demonstrated the CBAAD’s effectiveness in predicting real-life attentional errors reported by TBI patients. In summary, the CBAAD demonstrates sensitivity to attentional dysfunction in TBI patients and the ability to predict real-world attentional errors, establishing its value as a comprehensive test battery for assessing attention in this specific population. Its implementation holds promise for enhancing the early identification of attentional impairments and facilitating tailored rehabilitation strategies for TBI patients.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139389196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}