
Multimodal Technologies and Interaction: Latest Articles

“From Gamers into Environmental Citizens”: A Systematic Literature Review of Empirical Research on Behavior Change Games for Environmental Citizenship
IF 2.5 Q2 Computer Science Pub Date: 2023-08-14 DOI: 10.3390/mti7080080
Yiannis Georgiou, A. Hadjichambis, D. Paraskeva-Hadjichambi, A. Adamou
As the global environmental crisis intensifies, there has been significant interest in behavior change games (BCGs) as a viable avenue to empower players’ pro-environmentalism. This pro-environmental empowerment is well aligned with the notion of environmental citizenship (EC), which aims at transforming citizens into “environmental agents of change” who seek to achieve more sustainable lifestyles. Despite these arguments, studies in this area are thinly spread and fragmented across various research domains. This article is grounded in a systematic review of empirical articles on BCGs for EC, published in peer-reviewed journals and conference proceedings over a span of fifteen years, conducted in order to map the scope of empirical research in the field. In total, 44 articles were reviewed to shed light on their methodological underpinnings, the gaming elements and persuasive strategies of the deployed BCGs, the EC actions facilitated by the BCGs, and the impact of BCGs on players’ EC competences. Our findings indicate that while BCGs seem to promote pro-environmental knowledge and attitudes, such an assertion is not fully warranted for pro-environmental behaviors. We reflect on our findings and provide future research directions to push the field of BCGs for EC forward.
Citations: 0
Design and Research of a Sound-to-RGB Smart Acoustic Device
IF 2.5 Q2 Computer Science Pub Date: 2023-08-13 DOI: 10.3390/mti7080079
Z. Zlatev, J. Ilieva, D. Orozova, G. Shivacheva, Nadezhda Angelova
This paper presents a device that converts sound wave frequencies into colors to assist people with hearing problems, addressing accessibility and communication barriers in the hearing-impaired community. The device uses a precise mathematical apparatus and carefully selected hardware to achieve accurate conversion of sound to color, supported by specialized automatic processing software suitable for standardization. Experimental evaluation shows excellent performance for frequencies below 1000 Hz, although limitations are encountered at higher frequencies, requiring further investigation into advanced noise filtering and hardware optimization. The device shows promise for various applications, including education, art, and therapy. The study acknowledges its limitations and suggests future research to generalize the models for converting sound frequencies to color and to improve usability for a broader range of hearing impairments. Feedback from the hearing-impaired community will play a critical role in further developing the device for practical use. Overall, this innovative device for converting sound to color represents a significant step toward improving accessibility and communication for people with hearing challenges. Continued research offers the potential to overcome challenges and extend the benefits of the device to a variety of areas, ultimately improving the quality of life for people with hearing impairments.
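The abstract does not disclose the device’s actual conversion formula, but the core idea of mapping audio frequencies below 1000 Hz onto colors can be sketched as follows. The logarithmic scale, the 20–1000 Hz band, and the red-to-violet hue range are illustrative assumptions, not the authors’ calibrated conversion.

```python
import colorsys
import math

def frequency_to_rgb(freq_hz, f_min=20.0, f_max=1000.0):
    """Map an audio frequency onto an RGB color.

    Illustrative sketch: frequencies are placed on a logarithmic scale
    between f_min and f_max and mapped to hue (red for low frequencies,
    violet for high ones). The real device uses its own calibrated model.
    """
    freq_hz = min(max(freq_hz, f_min), f_max)  # clamp to the supported band
    # Position of the frequency on a log scale, normalized to [0, 1].
    t = (math.log(freq_hz) - math.log(f_min)) / (math.log(f_max) - math.log(f_min))
    # Hue 0.0 is red; 0.8 is violet. Full saturation and brightness.
    r, g, b = colorsys.hsv_to_rgb(0.8 * t, 1.0, 1.0)
    return tuple(round(255 * c) for c in (r, g, b))
```

For example, 20 Hz maps to pure red, while 1000 Hz maps to violet; frequencies outside the band are clamped, mirroring the paper’s observation that performance degrades above 1000 Hz.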
Citations: 0
Multimodal Interaction for Cobot Using MQTT
IF 2.5 Q2 Computer Science Pub Date: 2023-08-03 DOI: 10.3390/mti7080078
J. Rouillard, Jean-Marc Vannobel
For greater efficiency, human–machine and human–robot interactions must be designed with the idea of multimodality in mind. To allow the use of several interaction modalities, such as voice, touch, and gaze tracking, on several different devices (computers, smartphones, tablets, etc.), and to integrate possible connected objects, it is necessary to have an effective and secure means of communication between the different parts of the system. This is even more important when a collaborative robot (cobot) shares the same space and works very close to the human during their tasks. This study presents research work in the field of multimodal interaction for a cobot using the MQTT protocol, in virtual (Webots) and real worlds (ESP microcontrollers, Arduino, IOT2040). We show how MQTT can be used efficiently, with a common publish/subscribe mechanism for several entities of the system, in order to interact with connected objects (like LEDs and conveyor belts), robotic arms (like the Ned Niryo), or mobile robots. We compare the use of MQTT with that of the Firebase Realtime Database used in several of our previous research works. We show how a “pick–wait–choose–and place” task can be carried out jointly by a cobot and a human, and what this implies in terms of communication and ergonomic rules, in light of health and industrial concerns (people with disabilities, and teleoperation).
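The publish/subscribe mechanism described above can be sketched in miniature. The in-process broker below only imitates MQTT topic matching (`+` matches one level, `#` the remainder); the topic names are invented for illustration, and a real deployment would use an actual broker (e.g., Mosquitto) with a client library such as paho-mqtt.

```python
class MiniBroker:
    """In-process stand-in for an MQTT broker, illustrating the
    publish/subscribe pattern the paper builds on. Topic names such as
    'cobot/voice/command' are illustrative, not the authors' scheme."""

    def __init__(self):
        self._subs = []  # list of (topic_filter, callback)

    @staticmethod
    def _matches(filt, topic):
        """MQTT-style matching: '+' matches one level, '#' the rest."""
        f_parts, t_parts = filt.split("/"), topic.split("/")
        for i, fp in enumerate(f_parts):
            if fp == "#":
                return True
            if i >= len(t_parts) or (fp != "+" and fp != t_parts[i]):
                return False
        return len(f_parts) == len(t_parts)

    def subscribe(self, topic_filter, callback):
        self._subs.append((topic_filter, callback))

    def publish(self, topic, payload):
        for filt, cb in self._subs:
            if self._matches(filt, topic):
                cb(topic, payload)

# A fusion component listens to every modality through one wildcard filter,
# so new modalities can be added without changing the subscriber.
broker = MiniBroker()
events = []
broker.subscribe("cobot/+/command", lambda t, p: events.append((t, p)))
broker.publish("cobot/voice/command", "pick")
broker.publish("cobot/gaze/command", "bin_3")
broker.publish("cobot/status/heartbeat", "ok")  # not matched by the filter
```

This decoupling (publishers never know who subscribes) is what lets voice, touch, and gaze modules, simulated robots in Webots, and real microcontrollers all share one message bus.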
Citations: 1
Enhancing Object Detection for VIPs Using YOLOv4_Resnet101 and Text-to-Speech Conversion Model
Q2 Computer Science Pub Date: 2023-08-02 DOI: 10.3390/mti7080077
Tahani Jaser Alahmadi, Atta Ur Rahman, Hend Khalid Alkahtani, Hisham Kholidy
Vision impairment affects an individual’s quality of life, posing challenges for visually impaired people (VIPs) in various aspects, such as object recognition and daily tasks. Previous research has focused on developing visual navigation systems to assist VIPs, but further improvements are needed in accuracy, speed, and the inclusion of a wider range of object categories that may obstruct VIPs’ daily lives. This study presents a modified version of YOLOv4 with a ResNet-101 backbone network (YOLOv4_Resnet101), trained on multiple object classes to assist VIPs in navigating their surroundings. In comparison to the Darknet backbone utilized in YOLOv4, the ResNet-101 backbone in YOLOv4_Resnet101 offers a deeper and more powerful feature extraction network. The ResNet-101’s greater capacity enables better representation of complex visual patterns, which increases the accuracy of object detection. The proposed model is validated using the Microsoft Common Objects in Context (MS COCO) dataset. Image pre-processing techniques are employed to enhance the training process, and manual annotation ensures accurate labeling of all images. The module incorporates text-to-speech conversion, providing VIPs with auditory information to assist in obstacle recognition. The model achieves an accuracy of 96.34% on the test images obtained from the dataset after 4000 iterations of training, with a loss error rate of 0.073%.
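The abstract does not specify how detections are turned into speech, but the announcement stage can be sketched as a pure function from detector output to a sentence handed to a TTS engine. The confidence threshold, the (x, y, w, h) pixel box format, the 640-pixel frame split into left/ahead/right thirds, and the wording are all assumptions for illustration.

```python
def detections_to_announcement(detections, min_confidence=0.5, frame_width=640):
    """Turn object detections into a short spoken message for a VIP user.

    Illustrative sketch of the text-to-speech stage: `detections` is a list
    of (label, confidence, (x, y, w, h)) tuples in pixel coordinates. The
    frame is split into thirds to say whether an obstacle is to the left,
    ahead, or to the right; none of these conventions come from the paper.
    """
    third = frame_width / 3
    phrases = []
    for label, confidence, (x, y, w, h) in detections:
        if confidence < min_confidence:
            continue  # drop uncertain detections rather than announce them
        center = x + w / 2
        side = "left" if center < third else "right" if center > 2 * third else "ahead"
        phrases.append(f"{label} {side}")
    return "; ".join(phrases) if phrases else "path clear"
```

The resulting string would then be passed to any speech synthesizer; keeping this step as a side-effect-free function makes the detection-to-audio pipeline easy to test.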
Citations: 0
Ability-Based Methods for Personalized Keyboard Generation.
IF 2.5 Q2 Computer Science Pub Date: 2022-08-01 Epub Date: 2022-08-03 DOI: 10.3390/mti6080067
Claire L Mitchell, Gabriel J Cler, Susan K Fager, Paola Contessa, Serge H Roy, Gianluca De Luca, Joshua C Kline, Jennifer M Vojtech

This study introduces an ability-based method for personalized keyboard generation, wherein an individual's own movement and human-computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. The characterization is automatically employed to develop a computationally efficient keyboard layout that prioritizes each user's movement abilities through capturing directional constraints and preferences. We evaluated our approach in a study involving 16 participants using inertial sensing and facial electromyography as an access method, resulting in significantly increased communication rates using the personalized keyboard (52.0 bits/min) when compared to a generically optimized keyboard (47.9 bits/min). Our results demonstrate the ability to effectively characterize an individual's movement abilities to design a personalized keyboard for improved communication. This work underscores the importance of integrating a user's motor abilities when designing virtual interfaces.
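The spirit of this approach, placing the characters a user needs most on the key positions they can reach most easily, can be sketched as a greedy assignment. This is not the authors’ optimization method; the key positions, movement costs, and character frequencies below are all invented for illustration.

```python
def personalize_layout(keys, movement_cost, char_frequency):
    """Greedy sketch of ability-based key assignment: the most frequent
    characters go to the keys the user reaches with the least effort.

    `movement_cost` maps each key position to the user's measured effort
    (e.g., derived from a multidirectional point-select task); the cost
    and frequency numbers used here are illustrative only.
    """
    # Cheapest keys first, most frequent characters first, then pair them up.
    keys_by_cost = sorted(keys, key=lambda k: movement_cost[k])
    chars_by_freq = sorted(char_frequency, key=char_frequency.get, reverse=True)
    return dict(zip(keys_by_cost, chars_by_freq))

# Toy example: three key positions with hypothetical per-user movement costs.
layout = personalize_layout(
    keys=["upper_left", "center", "lower_right"],
    movement_cost={"upper_left": 2.5, "center": 1.0, "lower_right": 4.0},
    char_frequency={"e": 12.7, "t": 9.1, "z": 0.07},  # percent, English text
)
```

A user whose cursor control favors one direction would get a different cost map and therefore a different layout, which is the essence of the ability-based personalization described above.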

Citations: 0
The Replica Project: Co-Designing a Discovery Engine for Digital Art History
IF 2.5 Q2 Computer Science Pub Date: 2022-01-01 DOI: 10.3390/mti6110100
I. D. Lenardo
Citations: 0
Acknowledgement to Reviewers of MTI in 2019
IF 2.5 Q2 Computer Science Pub Date: 2020-01-01 DOI: 10.3390/mti4010002
MTI Editorial Office
Citations: 0
Socrative in Higher Education: Game vs. Other Uses
IF 2.5 Q2 Computer Science Pub Date: 2019-07-06 DOI: 10.3390/MTI3030049
Fátima Faya Cerqueiro, Anastasia Harrison
The integration of clickers in Higher Education settings has proved to be particularly useful for enhancing motivation, engagement and performance; for developing cooperative or collaborative tasks; for checking understanding during the lesson; or even for assessment purposes. This paper explores and exemplifies three uses of Socrative, a mobile application specifically designed as a clicker for the classroom. Socrative was used during three sessions with the same group of first-year University students at a Faculty of Education. One of these sessions—a review lesson—was gamified, whereas the other two—a collaborative reading activity seminar, and a lecture—were not. Ad-hoc questionnaires were distributed after each of them. Results suggest that students welcome the use of clickers and that combining them with gamification strategies may increase students’ perceived satisfaction. The experiences described in this paper show how Socrative is an effective means of providing formative feedback and may actually save time during lessons.
Citations: 11
Conveying Emotions by Touch to the Nao Robot: A User Experience Perspective
IF 2.5 Q2 Computer Science Pub Date: 2018-12-16 DOI: 10.3390/MTI2040082
Beatrice Alenljung, Rebecca Andreasson, Robert J. Lowe, E. Billing, J. Lindblom
Social robots are expected to be used gradually by more and more people in a wider range of settings, domestic as well as professional. As a consequence, the feature and quality requirements on human–robot interaction will increase, including the possibility to communicate emotions and to establish a positive user experience, e.g., through touch. In this paper, the focus is on depicting how humans, as the users of robots, experience tactile emotional communication with the Nao robot, as well as on identifying aspects affecting the experience and touch behavior. A qualitative investigation was conducted as part of a larger experiment. The major findings consist of 15 different aspects that vary along one or more dimensions, how those influence the four dimensions of user experience present in the study, and the different parts of touch behavior involved in conveying emotions.
Citations: 15
Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks
IF 2.5 Q2 Computer Science Pub Date: 2018-12-06 DOI: 10.3390/MTI2040081
C. Zimmerer, Martin Fischbach, Marc Erich Latoschik
Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial for implementing semantic fusion. They are compatible with the rapid development cycles common in user interface development, in contrast to machine-learning approaches that require time-consuming training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, and support for chronologically unsorted as well as probabilistic input. A subsequent analysis reveals, however, that there is currently no solution fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills this gap among previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation was and is used in various student projects, theses, and master-level courses. It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
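The transition-network idea underlying the cATN can be illustrated with a far simpler sketch: states advance on (modality, token) events, and registers accumulate the semantic payload of a classic "put that there" style command. This toy network is single-threaded and omits the cATN's concurrency, probabilistic input, and description language; all state and event names are invented.

```python
class TransitionNetwork:
    """Minimal transition-network sketch for multimodal fusion, in the
    spirit of (but much simpler than) the cATN. Transitions fire on
    (modality, token) events; payloads are stored under the state reached."""

    def __init__(self, transitions, start, accept):
        self.transitions = transitions  # {(state, modality, token): next_state}
        self.state, self.accept = start, accept
        self.registers = {}

    def feed(self, modality, token, payload=None):
        key = (self.state, modality, token)
        if key in self.transitions:
            self.state = self.transitions[key]
            if payload is not None:
                self.registers[self.state] = payload
        return self.state

# "Put that there": speech drives the structure, pointing fills the slots.
net = TransitionNetwork(
    transitions={
        ("S0", "speech", "put"): "S1",
        ("S1", "gesture", "point"): "S2",    # object selection
        ("S2", "speech", "there"): "S3",
        ("S3", "gesture", "point"): "DONE",  # target location
    },
    start="S0",
    accept="DONE",
)
net.feed("speech", "put")
net.feed("gesture", "point", payload="cup_42")
net.feed("speech", "there")
net.feed("gesture", "point", payload=(0.4, 1.2))
```

Once the accept state is reached, the registers hold the fused semantic frame (which object, which location) ready for action derivation, the first of the seven requirements listed above.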
{"title":"Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks","authors":"C. Zimmerer, Martin Fischbach, Marc Erich Latoschik","doi":"10.3390/MTI2040081","DOIUrl":"https://doi.org/10.3390/MTI2040081","url":null,"abstract":"Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial to implement semantic fusion. They are compliant with rapid development cycles that are common for the development of user interfaces, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: Action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as the support of chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution for fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof of concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills the lack amongst previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: Its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation was and is used in various student projects, theses, as well as master-level courses. 
It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/MTI2040081","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69756311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5