Tele-Guidance System to Support Anticipation during Communication
T. Yamamoto, M. Otsuki, H. Kuzuoka, Yusuke Suzuki
Multimodal Technologies and Interaction, vol. 2, p. 55 (2018-09-06). DOI: 10.3390/MTI2030055

Tele-guidance systems for the remote monitoring and maintenance of equipment have been extensively investigated. Such systems enable a remote helper to provide guidance to a local worker while perceiving local conditions. In this study, we propose a tele-guidance system that supports the anticipation of an interlocutor’s actions during communication. The proposed system enables a helper and a worker to anticipate each other’s actions by allowing them to move around the workspace freely and observe each other’s non-verbal cues (e.g., body motions and other gestures) through a head-mounted display. We conducted an experiment to compare the effectiveness of the proposed method with that of an existing method (a simple tele-pointer) for supporting anticipation during communication.
Tangible Representational Properties: Implications for Meaning Making
Taciana Pontual Falcão
Multimodal Technologies and Interaction (2018-09-05). DOI: 10.3390/mti2030054

Tangible technologies are considered promising tools for learning, by enabling multimodal interaction through physical action and manipulation of physical and digital elements, thus facilitating representational concrete–abstract links. A key concept in a tangible system is that its physical components are objects of interest, with associated meanings relevant to the context. Tangible technologies are said to provide ‘natural’ mappings that employ spatial analogies and adhere to cultural standards, capitalising on people’s familiarity with the physical world. Students with intellectual disabilities particularly benefit from interaction with tangibles, given their difficulties with perception and abstraction. However, symbolic information does not always have an obvious physical equivalent, and meanings do not reside in the representations used in the artefacts themselves, but in the ways they are manipulated and interpreted. In educational contexts, meaning attached to artefacts by designers is not necessarily transparent to students, nor interpreted by them as the designer predicted. Using artefacts and understanding their significance is of utmost importance for the construction of knowledge within the learning process; hence the need to study the use of the artefacts in contexts of practice and how they are transformed by the students. This article discusses how children with intellectual disabilities conceptually interpreted the elements of four tangible artefacts, and which characteristics of these tangibles were key for productive, multimodal interaction, thus potentially guiding designers and educators. Analysis shows the importance of designing physical-digital semantic mappings that capitalise on conceptual metaphors related to children’s familiar contexts, rather than using more abstract representations. Such metaphorical connections, preferably building on physical properties, contribute to children’s comprehension and facilitate their exploration of the systems.
Reviews of Social Embodiment for Design of Non-Player Characters in Virtual Reality-Based Social Skill Training for Autistic Children
Jewoong Moon
Multimodal Technologies and Interaction, vol. 2, p. 53 (2018-09-04). DOI: 10.3390/MTI2030053

The purpose of this paper is to review the scholarly work on social embodiment as it relates to the design of non-player characters (NPCs) in virtual reality (VR)-based social skill training for autistic children. Such training provides a naturalistic environment that allows autistic children to shape socially appropriate behaviors for the real world. To build this training environment, it is necessary to identify how to simulate social components within the training. In particular, the design of NPCs is essential in determining the quality of the simulated social interactions during training. Through this literature review, the study proposes multiple design themes that underline the nature of social embodiment in which interactions with NPCs in VR-based social skill training take place.
Design for an Art Therapy Robot: An Explorative Review of the Theoretical Foundations for Engaging in Emotional and Creative Painting with a Robot
M. Cooney, M. Menezes
Multimodal Technologies and Interaction, vol. 2, p. 52 (2018-09-03). DOI: 10.3390/MTI2030052

Social robots are being designed to help support people’s well-being in domestic and public environments. To address increasing incidences of psychological and emotional difficulties such as loneliness, and a shortage of human healthcare workers, we believe that robots will also play a useful role in engaging with people in therapy on an emotional and creative level, e.g., in music, drama, play, and art therapy. Here, we focus on the latter: an autonomous robot capable of painting with a person. A challenge is that the theoretical foundations are highly complex; we are only just beginning to understand emotions and creativity in human science, and both have been described as highly important challenges in artificial intelligence. To gain insight, we review some of the literature on robots used for therapy and art, potential strategies for interacting, and mechanisms for expressing emotions and creativity. In doing so, we also suggest the usefulness of the responsive art approach as a starting point for art therapy robots, describe a perceived gap between our understanding of emotions in human science and what is currently typically being addressed in engineering studies, and identify some potential ethical pitfalls and solutions for avoiding them. Based on our arguments, we propose a design for an art therapy robot, also discussing a simplified prototype implementation, toward informing future work in the area.
Animals Make Music: A Look at Non-Human Musical Expression
Reinhard Gupfinger, Martin Kaltenbrunner
Multimodal Technologies and Interaction, vol. 2, p. 51 (2018-09-02). DOI: 10.3390/MTI2030051

The use of musical instruments and interfaces that involve animals in the interaction process is an emerging, yet not widespread practice. The projects that have been implemented in this unusual field are raising questions concerning ethical principles, animal-centered design processes, and the possible benefits and risks for the animals involved. Animal–Computer Interaction is a novel field of research that offers a framework (the ACI manifesto) for implementing interactive technology for animals. Based on this framework, we have examined several projects focusing on the interplay between animals and music technology in order to arrive at a better understanding of animal-based musical projects. Building on this, we discuss how the implementation of new musical instruments and interfaces could provide new opportunities for improving the quality of life of grey parrots living in captivity.
Deep Learning and Medical Diagnosis: A Review of Literature
Mihalj Bakator, D. Radosav
Multimodal Technologies and Interaction, vol. 2, p. 47 (2018-08-17). DOI: 10.3390/MTI2030047

In this review, the application of deep learning to medical diagnosis is addressed. A thorough analysis of scientific articles on the application of deep neural networks in the medical field was conducted. More than 300 research articles were obtained, and after several selection steps, 46 articles were examined in more detail. The results indicate that convolutional neural networks (CNNs) are the most widely represented architecture when it comes to deep learning and medical image analysis. Furthermore, based on the findings of this review, the application of deep learning technology is widespread, but the majority of applications are focused on bioinformatics, medical diagnosis, and other similar fields.
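The review above singles out convolutional neural networks as the dominant architecture for medical image analysis. Their core building block, the 2D convolution, can be illustrated with a minimal plain-Python sketch (a hypothetical example for illustration only; it is not code from any of the reviewed studies):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel applied to a tiny image patch whose
# right half is bright: the filter responds strongly everywhere,
# because every window straddles the dark-to-bright boundary.
patch = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
vertical_edge = [
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
]
feature_map = conv2d(patch, vertical_edge)  # 2x2 map of responses
```

In a real CNN, many such kernels are learned from labeled scans rather than hand-designed, and the resulting feature maps are stacked, downsampled, and fed to further layers.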
Technology for Remote Health Monitoring in an Older Population: A Role for Mobile Devices
Kate Dupuis, L. Tsotsos
Multimodal Technologies and Interaction, vol. 2, p. 43 (2018-07-27). DOI: 10.3390/MTI2030043

The impact of an aging population on healthcare and the sustainability of our healthcare system are pressing issues in contemporary society. Technology has the potential to address these challenges, alleviating pressures on the healthcare system and empowering individuals to have greater control over monitoring their own health. Importantly, mobile devices such as smartphones and tablets can allow older adults to have “on the go” access to health-related information. This paper explores mobile health apps that enable older adults and those who care for them to track health-related factors such as body readings and medication adherence, and it serves as a review of the literature on the usability and acceptance of mobile health apps in an older population.
Opportunities and Challenges of Bodily Interaction for Geometry Learning to Inform Technology Design
S. Price, S. Duffy
Multimodal Technologies and Interaction, vol. 2, p. 41 (2018-07-09). DOI: 10.3390/MTI2030041

An increasing body of work provides evidence of the importance of bodily experience for cognition and the learning of mathematics. Sensor-based technologies have potential for guiding sensori-motor engagement with challenging mathematical ideas in new ways. Yet, designing environments that promote appropriate sensori-motor interaction that effectively supports salient foundations of mathematical concepts is challenging and requires understanding of the opportunities and challenges that bodily interaction offers. This study aimed to better understand how young children can, and do, use their bodies to explore the geometrical concepts of angle and shape, and what contribution the different sensori-motor experiences make to the comprehension of mathematical ideas. Twenty-nine students aged 6–10 years participated in an exploratory study, with paired and group activities designed to elicit intuitive bodily enactment of angles and shape. Our analysis, focusing on moment-by-moment bodily interactions, attended to gesture, action, facial expression, body posture and talk, illustrated the ‘realms of possibilities’ of bodily interaction, and highlighted challenges around ‘felt’ experience and egocentric vs. allocentric perception of the body during collaborative bodily enactment. These findings inform digital designs for sensory interaction to foreground salient geometric features and effectively support relevant forms of enactment to enhance the learning experience, supporting challenging aspects of interaction and exploiting the opportunities of the body.
Animal-to-Animal Data Sharing Mechanism for Wildlife Monitoring in Fukushima Exclusion Zone
H. Kobayashi, Keijiro Nakagawa, K. Makiyama, Yuta Sasaki, Hiromi Kudo, Baburam Niraula, K. Sezaki
Multimodal Technologies and Interaction, vol. 2, p. 40 (2018-07-03). DOI: 10.3390/MTI2030040

We propose an animal-to-animal data sharing mechanism that employs wildlife-borne sensing devices to expand the size of monitoring areas in which electricity, information, and road infrastructures are either limited or nonexistent. With the proposed approach, monitoring information can be collected from remote areas in a safe and cost-effective manner. To substantially prolong the life of a sensor node, the proposed mechanism activates the communication capabilities only when there is a plurality of animals; otherwise, the sensor node remains in a sleep state. This study aimed to achieve three objectives. First, we intend to obtain knowledge based on actual field operations within the Fukushima exclusion zone. Second, we attempt to realize an objective evaluation of the power supply and work base required to properly evaluate the proposed mechanism. Third, we intend to acquire data to support wildlife research, which is the objective of both our present and future research.
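The wake-on-encounter mechanism described above (radio active only when a plurality of animals is nearby, sleep otherwise) can be sketched as a simple state machine. This is a hypothetical illustration; the class name, threshold parameter, and tick-based loop are assumptions, not the authors' implementation:

```python
class SensorNode:
    """Wildlife-borne node that radios only when other tagged
    animals are in range, and sleeps otherwise to save power."""

    def __init__(self, min_peers=1):
        self.min_peers = min_peers  # plurality threshold
        self.state = "sleep"
        self.transmissions = 0

    def tick(self, peers_in_range):
        # Activate the radio only when a plurality of animals
        # (other tagged peers) is detected nearby; otherwise
        # drop back into the low-power sleep state.
        if peers_in_range >= self.min_peers:
            self.state = "active"
            self.transmissions += 1  # exchange buffered data
        else:
            self.state = "sleep"
        return self.state

node = SensorNode(min_peers=1)
# Peer counts observed at successive wake-up checks:
states = [node.tick(n) for n in [0, 0, 2, 1, 0]]
# states -> ['sleep', 'sleep', 'active', 'active', 'sleep']
```

Keeping the radio off except during encounters is what lets data hop opportunistically from animal to animal out of the infrastructure-free zone while the node's battery lasts.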
Exploring Emergent Features of Student Interaction within an Embodied Science Learning Simulation
Jina Kang, Robb Lindgren, James Planey
Multimodal Technologies and Interaction, vol. 2, p. 39 (2018-07-02). DOI: 10.3390/MTI2030039

Theories of embodied cognition argue that human processes of thinking and reasoning are deeply connected with the actions and perceptions of the body. Recent research suggests that these theories can be successfully applied to the design of learning environments, and new technologies enable multimodal platforms that respond to students’ natural physical activity, such as their gestures. This study examines how students engaged with an embodied mixed-reality science learning simulation that uses advanced gesture recognition techniques to support full-body interaction. The simulation environment acts as a communication platform for students to articulate their understanding of non-linear growth within different science contexts. In particular, this study investigates the different multimodal interaction metrics that were generated as students attempted to make sense of cross-cutting science concepts through a personalized gesture scheme. Starting with video recordings of students’ full-body gestures, we examined the relationship between these embodied expressions and their subsequent success in reasoning about non-linear growth. We report the patterns that we identified, and explicate our findings by detailing a few insightful cases of student interactions. Implications for the design of multimodal interaction technologies, and for the metrics used to investigate different types of student interaction during learning, are discussed.