Virtual Urban Field Studies: Evaluating Urban Interaction Design Using Context-Based Interface Prototypes
Robert Dongas, Kazjon Grace, Samuel Gillespie, Marius Hoggenmueller, M. Tomitsch, Stewart Worrall
Multimodal Technologies and Interaction, 2023-08-18. DOI: 10.3390/mti7080082
In this study, we propose the use of virtual urban field studies (VUFS) through context-based interface prototypes for evaluating the interaction design of auditory interfaces. Virtual field tests use mixed-reality technologies to combine the fidelity of real-world testing with the affordability and speed of testing in the lab. In this paper, we apply this concept to rapidly test sound designs for autonomous vehicle (AV)–pedestrian interaction with a high degree of realism and fidelity. We also propose the use of psychometrically validated measures of presence for validating the verisimilitude of VUFS. Using mixed qualitative and quantitative methods, we analysed users’ perceptions of presence in our VUFS prototype and its relationship to the prototype’s effectiveness. We also examined the use of higher-order ambisonic spatialised audio and its impact on presence. Our results provide insights into how VUFS can be designed to facilitate presence, as well as design guidelines for leveraging it.
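The abstract does not detail the spatial-audio pipeline; for orientation, the sketch below shows first-order ambisonic (B-format) encoding of a mono source at a given direction, using the classic FuMa weighting. This is an illustration of the technique the study builds on, not the authors' implementation; the higher-order ambisonics evaluated in the paper extends the same idea with additional spherical-harmonic channels.

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg=0.0):
    """Encode a mono signal into first-order ambisonics (B-format).

    Illustrative only: uses the traditional FuMa channel weighting, in
    which the omnidirectional W channel is attenuated by 1/sqrt(2).
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)             # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)  # front/back axis
    y = mono * np.sin(az) * np.cos(el)  # left/right axis
    z = mono * np.sin(el)               # up/down axis
    return np.stack([w, x, y, z])

# Example: a 1 s, 440 Hz tone placed 90 degrees to the listener's left.
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
bformat = encode_foa(tone, azimuth_deg=90)
```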
{"title":"Virtual Urban Field Studies: Evaluating Urban Interaction Design Using Context-Based Interface Prototypes","authors":"Robert Dongas, Kazjon Grace, Samuel Gillespie, Marius Hoggenmueller, M. Tomitsch, Stewart Worrall","doi":"10.3390/mti7080082","DOIUrl":"https://doi.org/10.3390/mti7080082","url":null,"abstract":"In this study, we propose the use of virtual urban field studies (VUFS) through context-based interface prototypes for evaluating the interaction design of auditory interfaces. Virtual field tests use mixed-reality technologies to combine the fidelity of real-world testing with the affordability and speed of testing in the lab. In this paper, we apply this concept to rapidly test sound designs for autonomous vehicle (AV)–pedestrian interaction with a high degree of realism and fidelity. We also propose the use of psychometrically validated measures of presence in validating the verisimilitude of VUFS. Using mixed qualitative and quantitative methods, we analysed users’ perceptions of presence in our VUFS prototype and the relationship to our prototype’s effectiveness. We also examined the use of higher-order ambisonic spatialised audio and its impact on presence. Our results provide insights into how VUFS can be designed to facilitate presence as well as design guidelines for how this can be leveraged.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46890537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Educators and students have shown significant interest in the potential for generative artificial intelligence (AI) technologies to support student learning outcomes, for example, by offering personalized experiences, 24 h conversational assistance, text editing and help with problem-solving. We review contemporary perspectives on the value of AI as a tool in an educational context and describe our recent research with undergraduate students, discussing why and how we integrated OpenAI tools ChatGPT and Dall-E into the curriculum during the 2022–2023 academic year. A small cohort of games programming students in the School of Computing and Digital Media at London Metropolitan University was given a research and development assignment that explicitly required them to engage with OpenAI. They were tasked with evaluating OpenAI tools in the context of game development, demonstrating a working solution and reporting on their findings. We present five case studies that showcase some of the outputs from the students and we discuss their work. This mode of assessment was both productive and popular, mapping to students’ interests and helping to refine their skills in programming, problem-solving, critical reflection and exploratory design.
{"title":"Creative Use of OpenAI in Education: Case Studies from Game Development","authors":"Fiona French, David Levi, Csaba Maczo, Aiste Simonaityte, Stefanos Triantafyllidis, Gergo Varda","doi":"10.3390/mti7080081","DOIUrl":"https://doi.org/10.3390/mti7080081","url":null,"abstract":"Educators and students have shown significant interest in the potential for generative artificial intelligence (AI) technologies to support student learning outcomes, for example, by offering personalized experiences, 24 h conversational assistance, text editing and help with problem-solving. We review contemporary perspectives on the value of AI as a tool in an educational context and describe our recent research with undergraduate students, discussing why and how we integrated OpenAI tools ChatGPT and Dall-E into the curriculum during the 2022–2023 academic year. A small cohort of games programming students in the School of Computing and Digital Media at London Metropolitan University was given a research and development assignment that explicitly required them to engage with OpenAI. They were tasked with evaluating OpenAI tools in the context of game development, demonstrating a working solution and reporting on their findings. We present five case studies that showcase some of the outputs from the students and we discuss their work. This mode of assessment was both productive and popular, mapping to students’ interests and helping to refine their skills in programming, problem-solving, critical reflection and exploratory design.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46287141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yiannis Georgiou, A. Hadjichambis, D. Paraskeva-Hadjichambi, A. Adamou
As the global environmental crisis intensifies, there has been significant interest in behavior change games (BCGs) as a viable avenue for empowering players’ pro-environmentalism. This pro-environmental empowerment is well aligned with the notion of environmental citizenship (EC), which aims at transforming citizens into “environmental agents of change” seeking to achieve more sustainable lifestyles. Despite these arguments, studies in this area are thinly spread and fragmented across various research domains. This article is grounded in a systematic review of empirical articles on BCGs for EC, covering a time span of fifteen years and published in peer-reviewed journals and conference proceedings, in order to map the scope of empirical research in the field. In total, 44 articles were reviewed to shed light on their methodological underpinnings, the gaming elements and persuasive strategies of the deployed BCGs, the EC actions facilitated by the BCGs, and the impact of BCGs on players’ EC competences. Our findings indicate that while BCGs seem to promote pro-environmental knowledge and attitudes, such an assertion is not fully warranted for pro-environmental behaviors. We reflect on our findings and provide future research directions to push forward the field of BCGs for EC.
{"title":"“From Gamers into Environmental Citizens”: A Systematic Literature Review of Empirical Research on Behavior Change Games for Environmental Citizenship","authors":"Yiannis Georgiou, A. Hadjichambis, D. Paraskeva-Hadjichambi, A. Adamou","doi":"10.3390/mti7080080","DOIUrl":"https://doi.org/10.3390/mti7080080","url":null,"abstract":"As the global environmental crisis intensifies, there has been a significant interest in behavior change games (BCGs), as a viable venue to empower players’ pro-environmentalism. This pro-environmental empowerment is well-aligned with the notion of environmental citizenship (EC), which aims at transforming citizens into “environmental agents of change”, seeking to achieve more sustainable lifestyles. Despite these arguments, studies in this area are thinly spread and fragmented across various research domains. This article is grounded on a systematic review of empirical articles on BCGs for EC covering a time span of fifteen years and published in peer-reviewed journals and conference proceedings, in order to provide an understanding of the scope of empirical research in the field. In total, 44 articles were reviewed to shed light on their methodological underpinnings, the gaming elements and the persuasive strategies of the deployed BCGs, the EC actions facilitated by the BCGs, and the impact of BCGs on players’ EC competences. Our findings indicate that while BCGs seem to promote pro-environmental knowledge and attitudes, such an assertion is not fully warranted for pro-environmental behaviors. We reflect on our findings and provide future research directions to push forward the field of BCGs for EC.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46996311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Z. Zlatev, J. Ilieva, D. Orozova, G. Shivacheva, Nadezhda Angelova
This paper presents a device that converts sound wave frequencies into colors to assist people with hearing impairments, addressing accessibility and communication challenges in the hearing-impaired community. The device uses a precise mathematical apparatus and carefully selected hardware to achieve accurate sound-to-color conversion, supported by specialized automatic processing software suitable for standardization. Experimental evaluation shows excellent performance for frequencies below 1000 Hz, although limitations appear at higher frequencies, requiring further investigation into advanced noise filtering and hardware optimization. The device shows promise for various applications, including education, art, and therapy. The study acknowledges its limitations and suggests future research to generalize the models for converting sound frequencies to color and to improve usability for a broader range of hearing impairments. Feedback from the hearing-impaired community will play a critical role in further developing the device for practical use. Overall, this device represents a significant step toward improving accessibility and communication for people with hearing challenges, and continued research offers the potential to extend its benefits to a variety of areas, ultimately improving quality of life for people with hearing impairments.
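The abstract does not specify the conversion model itself; purely as an illustration of the idea, the sketch below maps frequency logarithmically onto the hue wheel, clamped to the band below 1000 Hz where the authors report excellent performance. The mapping function and its parameters are assumptions, not the paper's model.

```python
import colorsys
import math

def frequency_to_rgb(freq_hz, f_min=20.0, f_max=1000.0):
    """Map an audible frequency onto an RGB colour (illustrative mapping)."""
    freq_hz = min(max(freq_hz, f_min), f_max)  # clamp to the working band
    position = math.log(freq_hz / f_min) / math.log(f_max / f_min)
    hue = 0.75 * position                      # red (low) through violet (high)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

print(frequency_to_rgb(440.0))  # concert A rendered as an RGB triple
```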
{"title":"Design and Research of a Sound-to-RGB Smart Acoustic Device","authors":"Z. Zlatev, J. Ilieva, D. Orozova, G. Shivacheva, Nadezhda Angelova","doi":"10.3390/mti7080079","DOIUrl":"https://doi.org/10.3390/mti7080079","url":null,"abstract":"This paper presents a device that converts sound wave frequencies into colors to assist people with hearing problems in solving accessibility and communication problems in the hearing-impaired community. The device uses a precise mathematical apparatus and carefully selected hardware to achieve accurate conversion of sound to color, supported by specialized automatic processing software suitable for standardization. Experimental evaluation shows excellent performance for frequencies below 1000 Hz, although limitations are encountered at higher frequencies, requiring further investigation into advanced noise filtering and hardware optimization. The device shows promise for various applications, including education, art, and therapy. The study acknowledges its limitations and suggests future research to generalize the models for converting sound frequencies to color and improving usability for a broader range of hearing impairments. Feedback from the hearing-impaired community will play a critical role in further developing the device for practical use. Overall, this innovative device for converting sound to color represents a significant step toward improving accessibility and communication for people with hearing challenges. Continued research offers the potential to overcome challenges and extend the benefits of the device to a variety of areas, ultimately improving the quality of life for people with hearing impairments.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46213482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For greater efficiency, human–machine and human–robot interactions must be designed with multimodality in mind. To allow the use of several interaction modalities, such as voice, touch, and gaze tracking, on several different devices (computers, smartphones, tablets, etc.), and to integrate possible connected objects, an effective and secure means of communication between the different parts of the system is necessary. This is even more important when a collaborative robot (cobot) shares the same space and works very close to the human during their tasks. This study presents research work in the field of multimodal interaction for a cobot using the MQTT protocol, in virtual (Webots) and real worlds (ESP microcontrollers, Arduino, IOT2040). We show how MQTT can be used efficiently, with a common publish/subscribe mechanism for the several entities of the system, to interact with connected objects (such as LEDs and conveyor belts), robotic arms (such as the Ned Niryo), or mobile robots. We compare the use of MQTT with that of the Firebase Realtime Database used in several of our previous research works. We show how a “pick–wait–choose–and place” task can be carried out jointly by a cobot and a human, and what this implies in terms of communication and ergonomic rules with respect to health and industrial concerns (people with disabilities, teleoperation).
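As a concrete illustration of the common publish/subscribe mechanism described above, here is a minimal sketch using the paho-mqtt Python client (1.x API). The broker address and topic names are hypothetical, not the authors' configuration; in the paper's setup, each entity (robot arm, LEDs, conveyor belt) would subscribe to its own command topics in the same way.

```python
import paho.mqtt.client as mqtt

BROKER = "localhost"  # hypothetical broker shared by all entities

def on_message(client, userdata, msg):
    # Each subscriber reacts to commands published on its topics.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()          # paho-mqtt 1.x constructor
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("cobot/arm/command")

# One step of a pick-wait-choose-and-place task could be triggered by
# publishing to the arm's command topic:
client.publish("cobot/arm/command", "pick")
client.loop_forever()
```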
{"title":"Multimodal Interaction for Cobot Using MQTT","authors":"J. Rouillard, Jean-Marc Vannobel","doi":"10.3390/mti7080078","DOIUrl":"https://doi.org/10.3390/mti7080078","url":null,"abstract":"For greater efficiency, human–machine and human–robot interactions must be designed with the idea of multimodality in mind. To allow the use of several interaction modalities, such as the use of voice, touch, gaze tracking, on several different devices (computer, smartphone, tablets, etc.) and to integrate possible connected objects, it is necessary to have an effective and secure means of communication between the different parts of the system. This is even more important with the use of a collaborative robot (cobot) sharing the same space and very close to the human during their tasks. This study present research work in the field of multimodal interaction for a cobot using the MQTT protocol, in virtual (Webots) and real worlds (ESP microcontrollers, Arduino, IOT2040). We show how MQTT can be used efficiently, with a common publish/subscribe mechanism for several entities of the system, in order to interact with connected objects (like LEDs and conveyor belts), robotic arms (like the Ned Niryo), or mobile robots. We compare the use of MQTT with that of the Firebase Realtime Database used in several of our previous research works. We show how a “pick–wait–choose–and place” task can be carried out jointly by a cobot and a human, and what this implies in terms of communication and ergonomic rules, via health or industrial concerns (people with disabilities, and teleoperation).","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43729126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision impairment affects an individual’s quality of life, posing challenges for visually impaired people (VIPs) in areas such as object recognition and daily tasks. Previous research has focused on developing visual navigation systems to assist VIPs, but further improvements are needed in accuracy, speed, and coverage of the wider range of object categories that may obstruct VIPs’ daily lives. This study presents a modified version of YOLOv4 with a ResNet-101 backbone (YOLOv4_Resnet101), trained on multiple object classes to assist VIPs in navigating their surroundings. Compared to the Darknet backbone used in standard YOLOv4, the ResNet-101 backbone offers a deeper and more powerful feature-extraction network; its greater capacity enables better representation of complex visual patterns, which increases object-detection accuracy. The proposed model is validated using the Microsoft Common Objects in Context (MS COCO) dataset. Image pre-processing techniques are employed to enhance the training process, and manual annotation ensures accurate labeling of all images. The module incorporates text-to-speech conversion, providing VIPs with auditory information to assist in obstacle recognition. The model achieves an accuracy of 96.34% on the test images after 4000 training iterations, with a loss error rate of 0.073%.
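The trained detector itself cannot be reproduced here; the sketch below illustrates only the final text-to-speech step, announcing detection results with the pyttsx3 library. The (label, confidence) input format and the confidence threshold are assumptions for illustration.

```python
import pyttsx3

def announce_detections(detections, min_confidence=0.5):
    """Speak out detected obstacle labels above a confidence threshold."""
    engine = pyttsx3.init()
    for label, confidence in detections:
        if confidence >= min_confidence:
            engine.say(f"{label} ahead")
    engine.runAndWait()

# Hypothetical output of one inference pass over a camera frame:
announce_detections([("person", 0.97), ("bicycle", 0.88), ("pole", 0.42)])
```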
{"title":"Enhancing Object Detection for VIPs Using YOLOv4_Resnet101 and Text-to-Speech Conversion Model","authors":"Tahani Jaser Alahmadi, Atta Ur Rahman, Hend Khalid Alkahtani, Hisham Kholidy","doi":"10.3390/mti7080077","DOIUrl":"https://doi.org/10.3390/mti7080077","url":null,"abstract":"Vision impairment affects an individual’s quality of life, posing challenges for visually impaired people (VIPs) in various aspects such as object recognition and daily tasks. Previous research has focused on developing visual navigation systems to assist VIPs, but there is a need for further improvements in accuracy, speed, and inclusion of a wider range of object categories that may obstruct VIPs’ daily lives. This study presents a modified version of YOLOv4_Resnet101 as backbone networks trained on multiple object classes to assist VIPs in navigating their surroundings. In comparison to the Darknet, with a backbone utilized in YOLOv4, the ResNet-101 backbone in YOLOv4_Resnet101 offers a deeper and more powerful feature extraction network. The ResNet-101’s greater capacity enables better representation of complex visual patterns, which increases the accuracy of object detection. The proposed model is validated using the Microsoft Common Objects in Context (MS COCO) dataset. Image pre-processing techniques are employed to enhance the training process, and manual annotation ensures accurate labeling of all images. The module incorporates text-to-speech conversion, providing VIPs with auditory information to assist in obstacle recognition. The model achieves an accuracy of 96.34% on the test images obtained from the dataset after 4000 iterations of training, with a loss error rate of 0.073%.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136383096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-08-01Epub Date: 2022-08-03DOI: 10.3390/mti6080067
Claire L Mitchell, Gabriel J Cler, Susan K Fager, Paola Contessa, Serge H Roy, Gianluca De Luca, Joshua C Kline, Jennifer M Vojtech
This study introduces an ability-based method for personalized keyboard generation, wherein an individual's own movement and human-computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. The characterization is automatically employed to develop a computationally efficient keyboard layout that prioritizes each user's movement abilities through capturing directional constraints and preferences. We evaluated our approach in a study involving 16 participants using inertial sensing and facial electromyography as an access method, resulting in significantly increased communication rates using the personalized keyboard (52.0 bits/min) when compared to a generically optimized keyboard (47.9 bits/min). Our results demonstrate the ability to effectively characterize an individual's movement abilities to design a personalized keyboard for improved communication. This work underscores the importance of integrating a user's motor abilities when designing virtual interfaces.
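The authors' layout-optimization method is not detailed in the abstract; as a toy illustration of the general idea, the sketch below assigns the most frequent letters to the keys a given user can select most cheaply, with per-key costs standing in for the directional movement characterization described above. All names and numbers here are hypothetical.

```python
# Letters in approximate descending English frequency order.
ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def personalize_layout(key_costs):
    """key_costs: {position: estimated selection cost in seconds}."""
    cheapest_first = sorted(key_costs, key=key_costs.get)
    return {pos: letter for pos, letter in zip(cheapest_first, ENGLISH_FREQ_ORDER)}

# Hypothetical costs for a user whose rightward movements are easier:
# lower rows and leftward columns cost more.
costs = {(row, col): 0.5 + 0.1 * row + 0.05 * (8 - col)
         for row in range(3) for col in range(9)}
layout = personalize_layout(costs)
print(layout[(0, 8)])  # the cheapest key receives 'e', the most frequent letter
```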
{"title":"Ability-Based Methods for Personalized Keyboard Generation.","authors":"Claire L Mitchell, Gabriel J Cler, Susan K Fager, Paola Contessa, Serge H Roy, Gianluca De Luca, Joshua C Kline, Jennifer M Vojtech","doi":"10.3390/mti6080067","DOIUrl":"10.3390/mti6080067","url":null,"abstract":"<p><p>This study introduces an ability-based method for personalized keyboard generation, wherein an individual's own movement and human-computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. The characterization is automatically employed to develop a computationally efficient keyboard layout that prioritizes each user's movement abilities through capturing directional constraints and preferences. We evaluated our approach in a study involving 16 participants using inertial sensing and facial electromyography as an access method, resulting in significantly increased communication rates using the personalized keyboard (52.0 bits/min) when compared to a generically optimized keyboard (47.9 bits/min). Our results demonstrate the ability to effectively characterize an individual's movement abilities to design a personalized keyboard for improved communication. This work underscores the importance of integrating a user's motor abilities when designing virtual interfaces.</p>","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"6 8","pages":""},"PeriodicalIF":2.5,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9608338/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40436065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Replica Project: Co-Designing a Discovery Engine for Digital Art History","authors":"I. D. Lenardo","doi":"10.3390/mti6110100","DOIUrl":"https://doi.org/10.3390/mti6110100","url":null,"abstract":"","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"6 1","pages":"100"},"PeriodicalIF":2.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69756257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acknowledgement to Reviewers of MTI in 2019","authors":"Mti Editorial Office","doi":"10.3390/mti4010002","DOIUrl":"https://doi.org/10.3390/mti4010002","url":null,"abstract":"","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"4 1","pages":"2"},"PeriodicalIF":2.5,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/mti4010002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69756640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The integration of clickers in Higher Education settings has proved to be particularly useful for enhancing motivation, engagement and performance; for developing cooperative or collaborative tasks; for checking understanding during the lesson; or even for assessment purposes. This paper explores and exemplifies three uses of Socrative, a mobile application specifically designed as a clicker for the classroom. Socrative was used during three sessions with the same group of first-year University students at a Faculty of Education. One of these sessions—a review lesson—was gamified, whereas the other two—a collaborative reading activity seminar, and a lecture—were not. Ad-hoc questionnaires were distributed after each of them. Results suggest that students welcome the use of clickers and that combining them with gamification strategies may increase students’ perceived satisfaction. The experiences described in this paper show how Socrative is an effective means of providing formative feedback and may actually save time during lessons.
{"title":"Socrative in Higher Education: Game vs. Other Uses","authors":"Fátima Faya Cerqueiro, Anastasia Harrison","doi":"10.3390/MTI3030049","DOIUrl":"https://doi.org/10.3390/MTI3030049","url":null,"abstract":"The integration of clickers in Higher Education settings has proved to be particularly useful for enhancing motivation, engagement and performance; for developing cooperative or collaborative tasks; for checking understanding during the lesson; or even for assessment purposes. This paper explores and exemplifies three uses of Socrative, a mobile application specifically designed as a clicker for the classroom. Socrative was used during three sessions with the same group of first-year University students at a Faculty of Education. One of these sessions—a review lesson—was gamified, whereas the other two—a collaborative reading activity seminar, and a lecture—were not. Ad-hoc questionnaires were distributed after each of them. Results suggest that students welcome the use of clickers and that combining them with gamification strategies may increase students’ perceived satisfaction. The experiences described in this paper show how Socrative is an effective means of providing formative feedback and may actually save time during lessons.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"3 1","pages":"49"},"PeriodicalIF":2.5,"publicationDate":"2019-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI3030049","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69756597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}