Humans are inherently communicative, usually interacting in dyads or groups. In this paper, we investigate group interactions with regard to performance in a rather formal gathering. In particular, a collection of ten performance indicators from the social group sciences is applied to assess the outcomes of the meetings in an automatic, machine learning-based way. For this, the Parking Lot Corpus, comprising 70 meetings in total, is analysed. First, we obtain baseline results for the automatic prediction of performance indicators on the corpus; this is the first time the Parking Lot Corpus has been used for this purpose. We then compare the baseline values to those obtained using bidirectional long short-term memory (BLSTM) networks. For multiple performance indicators, improvements over the baseline results can be achieved. Furthermore, the experiments showed a trend that the acoustic material of the remaining group, rather than that of the group leader, should be used for the prediction of team performance.
{"title":"Group Leader vs. Remaining Group—Whose Data Should Be Used for Prediction of Team Performance?","authors":"Ronald Böck","doi":"10.3390/mti7090090","DOIUrl":"https://doi.org/10.3390/mti7090090","url":null,"abstract":"Humans are considered to be communicative, usually interacting in dyads or groups. In this paper, we investigate group interactions regarding performance in a rather formal gathering. In particular, a collection of ten performance indicators used in social group sciences is used to assess the outcomes of the meetings in this manuscript, in an automatic, machine learning-based way. For this, the Parking Lot Corpus, comprising 70 meetings in total, is analysed. At first, we obtain baseline results for the automatic prediction of performance results on the corpus. This is the first time the Parking Lot Corpus is tapped in this sense. Additionally, we compare baseline values to those obtained, utilising bidirectional long-short term memories. For multiple performance indicators, improvements in the baseline results are able to be achieved. Furthermore, the experiments showed a trend that the acoustic material of the remaining group should use for the prediction of team performance.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134913461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented Reality (AR) is increasingly present in several fields, including the museological space, where the challenges of presenting objects interactively and attractively are constant, especially given the sociocultural changes of recent decades. Although there are numerous studies on AR in museums, the perspective of museum professionals on the technology remains underexplored. Thus, in this study, we use a qualitative design and conduct in-depth interviews with professionals from 10 Portuguese museums involved in creating or applying AR within these environments. Applying grounded theory, the researchers propose a framework to understand Portuguese museum professionals’ practices, perceptions, and experiences with AR in museum environments. The findings allow the creation of a theoretical framework divided into four levels: the perceptions of museum professionals on the role and use of AR; the understanding of departments, museum teams, and digital strategies; the perceived challenges, limitations, and advantages of augmented reality technologies; and the future perspectives of AR in museums. The theory resulting from this study may also contribute suggestions for the design and implementation of AR in museums, which both museum professionals and designers can use.
{"title":"Augmented Reality in Portuguese Museums: A Grounded Theory Study on the Museum Professionals’ Perspectives","authors":"Natacha Fernandes, Joana Casteleiro-Pitrez","doi":"10.3390/mti7090087","DOIUrl":"https://doi.org/10.3390/mti7090087","url":null,"abstract":"Augmented Reality (AR) is increasingly present in several fields, including the museological space, where the challenges of presenting objects interactively and attractively are constant, especially with the sociocultural changes of recent decades. Although there are numerous studies on AR in museums, the perspective of museum professionals on the technology still needs to be explored. Thus, in this study, we use a qualitative design and conduct in-depth interviews with professionals from 10 Portuguese museums involved in creating or applying AR within these environments. Applying the grounded theory, the researchers propose a framework to understand Portuguese museum professionals’ practices, perceptions, and experiences with AR in museum environments. The findings allow the creation of a theoretical framework divided into four levels, namely the perceptions of museum professionals on the role and use of AR, the understanding of departments, museum teams, and digital strategies, the perceived challenges, limitations, and advantages in the use of augmented reality technologies, and the future perspectives of AR in museums. The theory resulting from this study may also contribute suggestions for the design and implementation of AR in museums, which both museum professionals and designers can use.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135979419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality offers many artistic possibilities when it comes to the creation of place-based public artworks. In this paper, we present a series of works around the topic of augmented reality (AR) art and place-based storytelling, including the use of walking as a creative method, a series of workshops with emerging artists, public AR art collaborations and a study to examine user experience when interacting with such artworks. Our findings from these works show the potential of integrating augmented reality with public physical artworks and offer guidance to artists and AR developers on how to expand this potential. For artists, we show the importance of the space in which the artwork will be placed and provide guidance on how to work with the space. For developers, we find that there is a need to create tools that work with artists’ existing practices and to investigate how to expand augmented reality past the limitations of site- or piece-specific apps.
{"title":"An Investigation of the Use of Augmented Reality in Public Art","authors":"Tamlyn Young, Mark T. Marshall","doi":"10.3390/mti7090089","DOIUrl":"https://doi.org/10.3390/mti7090089","url":null,"abstract":"Augmented reality offers many artistic possibilities when it comes to the creation of place-based public artworks. In this paper, we present a series of works around the topic of augmented reality (AR) art and place-based storytelling, including the use of walking as a creative method, a series of workshops with emerging artists, public AR art collaborations and a study to examine user experience when interacting with such artworks. Our findings from these works show the potential of integrating augmented reality with public physical artworks and offer guidance to artists and AR developers on how to expand this potential. For artists, we show the importance of the space in which the artwork will be placed and provide guidance on how to work with the space. For developers, we find that there is a need to create tools that work with artists’ existing practices and to investigate how to expand augmented reality past the limitations of site- or piece-specific apps.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136024573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Georg Regal, Daniele Pretolesi, Helmut Schrom-Feiertag, Jaison Puthenkalam, Massimo Migliorini, Elios De Maio, Francesca Scarrone, Marina Nadalin, Massimiliano Guarneri, Grace P. Xerri, Daniele Di Giovanni, Paola Tessari, Federica Genna, Andrea D’Angelo, Markus Murtinger
The contemporary geopolitical environment and the strategic uncertainty shaped by asymmetric and hybrid threats call for the further development of hands-on training in realistic environments. Training in immersive, virtual environments is a promising approach: it can support training for contexts that are otherwise hard to access, dangerous, or costly. This paper discusses the challenges for virtual reality training in the CBRN (chemical, biological, radiological, nuclear) domain. Based on initial considerations and a literature review, we conducted a survey and three workshops to gather requirements for CBRN training in virtual environments. We structured the gathered insights into four overarching themes: the future of CBRN training, ethical and safety requirements, evaluation and feedback, and tangible objects and tools. We provide insights on these four themes and discuss recommendations.
{"title":"Challenges in Virtual Reality Training for CBRN Events","authors":"Georg Regal, Daniele Pretolesi, Helmut Schrom-Feiertag, Jaison Puthenkalam, Massimo Migliorini, Elios De Maio, Francesca Scarrone, Marina Nadalin, Massimiliano Guarneri, Grace P. Xerri, Daniele Di Giovanni, Paola Tessari, Federica Genna, Andrea D’Angelo, Markus Murtinger","doi":"10.3390/mti7090088","DOIUrl":"https://doi.org/10.3390/mti7090088","url":null,"abstract":"The contemporary geopolitical environment and strategic uncertainty shaped by asymmetric and hybrid threats urge the future development of hands-on training in realistic environments. Training in immersive, virtual environments is a promising approach. Immersive training can support training for contexts that are otherwise hard to access, dangerous, or have high costs. This paper discusses the challenges for virtual reality training in the CBRN (chemical, biological, radioactive, nuclear) domain. Based on initial considerations and a literature review, we conducted a survey and three workshops to gather requirements for CBRN training in virtual environments. We structured the gathered insights into four overarching themes—the future of CBRN training, ethical and safety requirements, evaluation and feedback, and tangible objects and tools. We provide insights on these four themes and discuss recommendations.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135937862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article presents a comparative analysis of four low- or no-code location-based game (LBG) authoring tools, namely Taleblazer, Aris, Actionbound, and Locatify. Each tool is examined in detail, with an emphasis on the functions and capabilities it provides for the development of LBGs. The article builds on the history and purpose of LBGs, their characteristics, and basic concepts and previous applications, placing emphasis on both the technological and pedagogical dimensions of these games. The evaluation of the tools is based on certain criteria, or metrics, recorded in the literature and on empirical data collected through the development of a prototype game for each tool. The tools are comparatively analyzed in terms of the constituent LBG features they incorporate, the fundamental and additional functionality provided to the developer, and the presence or absence of features that draw players into the game experience. Moreover, feedback based on the practical use of the platforms for developing LBGs is provided to support prospective developers in making an informed choice of an LBG platform for implementing a specific game. The games were created by taking advantage of as many features of the tools as possible in order to achieve a fairer and more complete evaluation. This study aims to highlight the affordances and limitations of the investigated low- or no-code LBG authoring tools, enabling anyone interested in developing an LBG to choose the most appropriate tool, taking into account their needs and technological background, or to design their own LBG authoring tool.
{"title":"A Comparative Analysis of Low or No-Code Authoring Tools for Location-Based Games","authors":"Christos Batsaras, S. Xinogalos","doi":"10.3390/mti7090086","DOIUrl":"https://doi.org/10.3390/mti7090086","url":null,"abstract":"This article presents a comparative analysis of four low or no-code location-based game (LBG) authoring tools, namely Taleblazer, Aris, Actionbound, and Locatify. Each tool is examined in detail, with an emphasis on the functions and capabilities it provides for the development of LBGs. The article builds on the history and purpose of LBGs, their characteristics, as well as basic concepts and previous applications, placing emphasis both on the technological and pedagogical dimensions of these games. The evaluation of the tools is based on certain criteria, or metrics, recorded in the literature and empirical data collected through the development of prototype games for each tool. The tools are comparatively analyzed in terms of the LBG’s constituent features they incorporate, the fundamental and additional functionality provided to the developer, as well as the existence or absence of features that captivate players in the game experience. Moreover, feedback is provided based on the practical use of the platforms for developing LBGs in order to support prospective developers in making an informed choice of an LBG platform for implementing a specific game. The games were created by taking advantage of as many features of the tools as possible in order to have a more fair and complete evaluation. This study aims to highlight the affordances and limitations of the investigated low or no-code LBG authoring tools, enabling anyone interested in developing an LBG to choose the most appropriate tool taking into account their needs and technological background or designing their own LBG authoring tools.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47166522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This retrospective study presents and summarizes our long-term efforts in the popularization of robotics, engineering, and artificial intelligence (STEM) using the NAO humanoid robot. By a conservative estimate, over a span of 8 years, we engaged at least a couple of thousand participants: approximately 70% were preschool children, 15% were elementary school students, and 15% were teenagers and adults. We describe several robot applications that were developed specifically for this task and assess their qualitative performance outside a controlled research setting, catering to various demographics, including those with special needs (ASD, ADHD). Five groups of applications are presented: (1) motor development activities and games, (2) children’s games, (3) theatrical performances, (4) artificial intelligence applications, and (5) data harvesting applications. Different cases of human–robot interactions are considered and evaluated according to our experience, and we discuss their weak points and potential improvements. We examine the response of the audience when confronted with a humanoid robot featuring intelligent behavior, such as conversational intelligence and emotion recognition. We consider the importance of the robot’s physical appearance, the emotional dynamics of human–robot engagement across age groups, the relevance of non-verbal cues, and analyze drawings crafted by preschool children both before and after their interaction with the NAO robot.
{"title":"Can You Dance? A Study of Child–Robot Interaction and Emotional Response Using the NAO Robot","authors":"V. Podpečan","doi":"10.3390/mti7090085","DOIUrl":"https://doi.org/10.3390/mti7090085","url":null,"abstract":"This retrospective study presents and summarizes our long-term efforts in the popularization of robotics, engineering, and artificial intelligence (STEM) using the NAO humanoid robot. By a conservative estimate, over a span of 8 years, we engaged at least a couple of thousand participants: approximately 70% were preschool children, 15% were elementary school students, and 15% were teenagers and adults. We describe several robot applications that were developed specifically for this task and assess their qualitative performance outside a controlled research setting, catering to various demographics, including those with special needs (ASD, ADHD). Five groups of applications are presented: (1) motor development activities and games, (2) children’s games, (3) theatrical performances, (4) artificial intelligence applications, and (5) data harvesting applications. Different cases of human–robot interactions are considered and evaluated according to our experience, and we discuss their weak points and potential improvements. We examine the response of the audience when confronted with a humanoid robot featuring intelligent behavior, such as conversational intelligence and emotion recognition. We consider the importance of the robot’s physical appearance, the emotional dynamics of human–robot engagement across age groups, the relevance of non-verbal cues, and analyze drawings crafted by preschool children both before and after their interaction with the NAO robot.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43486497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Hutchcraft, R. Wallon, Shanna Fealy, Donovan Jones, R. Galvez
Integration of technology within problem-based learning curricula is expanding; however, information regarding student experiences and attitudes about the integration of such technologies is limited. This study aimed to evaluate pre-clinical medical students’ perceptions and use patterns of the “Road to Birth” (RtB) software, a novel program designed to support human maternal anatomy and physiology education. Second-year medical students at a large midwestern American university participated in a prospective, mixed-methods study. The RtB software is available as a mobile smartphone/tablet application and in immersive virtual reality. The program was integrated into problem-based learning activities across a three-week obstetrics teaching period. Student visuospatial ability, weekly program usage, weekly user satisfaction, and end-of-course focus group interview data were obtained. Survey data were analyzed and summarized using descriptive statistics. Focus group interview data were analyzed using inductive thematic analysis. Of the eligible students, 66% (19/29) consented to participate in the study, with four students contributing to the focus group interview. Students reported incremental knowledge increases on weekly surveys (69.2% week one, 71.4% week two, and 78.6% week three). Qualitative results indicated the RtB software was perceived as a useful educational resource; however, its interactive nature could have been further optimized. Students reported increased use of portable devices over time and preferred convenient options when using technology incorporated into the curriculum. This study identifies opportunities to better integrate technology into problem-based learning practices in medical education. Further empirical research is warranted with larger and more diverse student samples.
{"title":"Evaluation of the Road to Birth Software to Support Obstetric Problem-Based Learning Education with a Cohort of Pre-Clinical Medical Students","authors":"M. Hutchcraft, R. Wallon, Shanna Fealy, Donovan Jones, R. Galvez","doi":"10.3390/mti7080084","DOIUrl":"https://doi.org/10.3390/mti7080084","url":null,"abstract":"Integration of technology within problem-based learning curricula is expanding; however, information regarding student experiences and attitudes about the integration of such technologies is limited. This study aimed to evaluate pre-clinical medical student perceptions and use patterns of the “Road to Birth” (RtB) software, a novel program designed to support human maternal anatomy and physiology education. Second-year medical students at a large midwestern American university participated in a prospective, mixed-methods study. The RtB software is available as a mobile smartphone/tablet application and in immersive virtual reality. The program was integrated into problem-based learning activities across a three-week obstetrics teaching period. Student visuospatial ability, weekly program usage, weekly user satisfaction, and end-of-course focus group interview data were obtained. Survey data were analyzed and summarized using descriptive statistics. Focus group interview data were analyzed using inductive thematic analysis. Of the eligible students, 66% (19/29) consented to participate in the study with 4 students contributing to the focus group interview. Students reported incremental knowledge increases on weekly surveys (69.2% week one, 71.4% week two, and 78.6% week three). Qualitative results indicated the RtB software was perceived as a useful educational resource; however, its interactive nature could have been further optimized. Students reported increased use of portable devices over time and preferred convenient options when using technology incorporated into the curriculum. This study identifies opportunities to better integrate technology into problem-based learning practices in medical education. Further empirical research is warranted with larger and more diverse student samples.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42016473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Víctor Martínez-Sánchez, Iván Villalón-Turrubiates, Francisco Cervantes-Álvarez, C. Hernández-Mejía
This research explores a novel Mexican Sign Language (MSL) lexicon video dataset containing the dynamic gestures most frequently used in MSL. Each gesture is represented by a set of different video versions recorded under uncontrolled conditions. The MX-ITESO-100 dataset comprises a lexicon of 100 gestures and 5000 videos from three participants with different grammatical elements. Additionally, the dataset is evaluated with a two-step neural network model, achieving an accuracy greater than 99%, and thus serves as a benchmark for future training of machine learning models in computer vision systems. Finally, this research promotes an inclusive environment within society and organizations, in particular for people with hearing impairments.
{"title":"Exploring a Novel Mexican Sign Language Lexicon Video Dataset","authors":"Víctor Martínez-Sánchez, Iván Villalón-Turrubiates, Francisco Cervantes-Álvarez, C. Hernández-Mejía","doi":"10.3390/mti7080083","DOIUrl":"https://doi.org/10.3390/mti7080083","url":null,"abstract":"This research explores a novel Mexican Sign Language (MSL) lexicon video dataset containing the dynamic gestures most frequently used in MSL. Each gesture consists of a set of different versions of videos under uncontrolled conditions. The MX-ITESO-100 dataset is composed of a lexicon of 100 gestures and 5000 videos from three participants with different grammatical elements. Additionally, the dataset is evaluated in a two-step neural network model as having an accuracy greater than 99% and thus serves as a benchmark for future training of machine learning models in computer vision systems. Finally, this research provides an inclusive environment within society and organizations, in particular for people with hearing impairments.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44469915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robert Dongas, Kazjon Grace, Samuel Gillespie, Marius Hoggenmueller, M. Tomitsch, Stewart Worrall
In this study, we propose the use of virtual urban field studies (VUFS) through context-based interface prototypes for evaluating the interaction design of auditory interfaces. Virtual field tests use mixed-reality technologies to combine the fidelity of real-world testing with the affordability and speed of testing in the lab. In this paper, we apply this concept to rapidly test sound designs for autonomous vehicle (AV)–pedestrian interaction with a high degree of realism and fidelity. We also propose the use of psychometrically validated measures of presence in validating the verisimilitude of VUFS. Using mixed qualitative and quantitative methods, we analysed users’ perceptions of presence in our VUFS prototype and the relationship to our prototype’s effectiveness. We also examined the use of higher-order ambisonic spatialised audio and its impact on presence. Our results provide insights into how VUFS can be designed to facilitate presence as well as design guidelines for how this can be leveraged.
{"title":"Virtual Urban Field Studies: Evaluating Urban Interaction Design Using Context-Based Interface Prototypes","authors":"Robert Dongas, Kazjon Grace, Samuel Gillespie, Marius Hoggenmueller, M. Tomitsch, Stewart Worrall","doi":"10.3390/mti7080082","DOIUrl":"https://doi.org/10.3390/mti7080082","url":null,"abstract":"In this study, we propose the use of virtual urban field studies (VUFS) through context-based interface prototypes for evaluating the interaction design of auditory interfaces. Virtual field tests use mixed-reality technologies to combine the fidelity of real-world testing with the affordability and speed of testing in the lab. In this paper, we apply this concept to rapidly test sound designs for autonomous vehicle (AV)–pedestrian interaction with a high degree of realism and fidelity. We also propose the use of psychometrically validated measures of presence in validating the verisimilitude of VUFS. Using mixed qualitative and quantitative methods, we analysed users’ perceptions of presence in our VUFS prototype and the relationship to our prototype’s effectiveness. We also examined the use of higher-order ambisonic spatialised audio and its impact on presence. Our results provide insights into how VUFS can be designed to facilitate presence as well as design guidelines for how this can be leveraged.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46890537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Educators and students have shown significant interest in the potential for generative artificial intelligence (AI) technologies to support student learning outcomes, for example, by offering personalized experiences, 24-hour conversational assistance, text editing and help with problem-solving. We review contemporary perspectives on the value of AI as a tool in an educational context and describe our recent research with undergraduate students, discussing why and how we integrated the OpenAI tools ChatGPT and Dall-E into the curriculum during the 2022–2023 academic year. A small cohort of games programming students in the School of Computing and Digital Media at London Metropolitan University was given a research and development assignment that explicitly required them to engage with OpenAI. They were tasked with evaluating OpenAI tools in the context of game development, demonstrating a working solution and reporting on their findings. We present five case studies that showcase some of the outputs from the students and we discuss their work. This mode of assessment was both productive and popular, mapping to students’ interests and helping to refine their skills in programming, problem-solving, critical reflection and exploratory design.
{"title":"Creative Use of OpenAI in Education: Case Studies from Game Development","authors":"Fiona French, David Levi, Csaba Maczo, Aiste Simonaityte, Stefanos Triantafyllidis, Gergo Varda","doi":"10.3390/mti7080081","DOIUrl":"https://doi.org/10.3390/mti7080081","url":null,"abstract":"Educators and students have shown significant interest in the potential for generative artificial intelligence (AI) technologies to support student learning outcomes, for example, by offering personalized experiences, 24 h conversational assistance, text editing and help with problem-solving. We review contemporary perspectives on the value of AI as a tool in an educational context and describe our recent research with undergraduate students, discussing why and how we integrated OpenAI tools ChatGPT and Dall-E into the curriculum during the 2022–2023 academic year. A small cohort of games programming students in the School of Computing and Digital Media at London Metropolitan University was given a research and development assignment that explicitly required them to engage with OpenAI. They were tasked with evaluating OpenAI tools in the context of game development, demonstrating a working solution and reporting on their findings. We present five case studies that showcase some of the outputs from the students and we discuss their work. This mode of assessment was both productive and popular, mapping to students’ interests and helping to refine their skills in programming, problem-solving, critical reflection and exploratory design.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46287141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}