
Multimodal Technologies and Interaction: Latest Publications

Virtual Urban Field Studies: Evaluating Urban Interaction Design Using Context-Based Interface Prototypes
IF 2.5 | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-08-18 | DOI: 10.3390/mti7080082
Robert Dongas, Kazjon Grace, Samuel Gillespie, Marius Hoggenmueller, M. Tomitsch, Stewart Worrall
In this study, we propose the use of virtual urban field studies (VUFS) through context-based interface prototypes for evaluating the interaction design of auditory interfaces. Virtual field tests use mixed-reality technologies to combine the fidelity of real-world testing with the affordability and speed of testing in the lab. In this paper, we apply this concept to rapidly test sound designs for autonomous vehicle (AV)–pedestrian interaction with a high degree of realism and fidelity. We also propose the use of psychometrically validated measures of presence in validating the verisimilitude of VUFS. Using mixed qualitative and quantitative methods, we analysed users’ perceptions of presence in our VUFS prototype and the relationship to our prototype’s effectiveness. We also examined the use of higher-order ambisonic spatialised audio and its impact on presence. Our results provide insights into how VUFS can be designed to facilitate presence as well as design guidelines for how this can be leveraged.
Citations: 0
Creative Use of OpenAI in Education: Case Studies from Game Development
IF 2.5 | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-08-18 | DOI: 10.3390/mti7080081
Fiona French, David Levi, Csaba Maczo, Aiste Simonaityte, Stefanos Triantafyllidis, Gergo Varda
Educators and students have shown significant interest in the potential for generative artificial intelligence (AI) technologies to support student learning outcomes, for example, by offering personalized experiences, 24 h conversational assistance, text editing and help with problem-solving. We review contemporary perspectives on the value of AI as a tool in an educational context and describe our recent research with undergraduate students, discussing why and how we integrated OpenAI tools ChatGPT and Dall-E into the curriculum during the 2022–2023 academic year. A small cohort of games programming students in the School of Computing and Digital Media at London Metropolitan University was given a research and development assignment that explicitly required them to engage with OpenAI. They were tasked with evaluating OpenAI tools in the context of game development, demonstrating a working solution and reporting on their findings. We present five case studies that showcase some of the outputs from the students and we discuss their work. This mode of assessment was both productive and popular, mapping to students’ interests and helping to refine their skills in programming, problem-solving, critical reflection and exploratory design.
Citations: 1
“From Gamers into Environmental Citizens”: A Systematic Literature Review of Empirical Research on Behavior Change Games for Environmental Citizenship
IF 2.5 | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-08-14 | DOI: 10.3390/mti7080080
Yiannis Georgiou, A. Hadjichambis, D. Paraskeva-Hadjichambi, A. Adamou
As the global environmental crisis intensifies, there has been a significant interest in behavior change games (BCGs), as a viable venue to empower players’ pro-environmentalism. This pro-environmental empowerment is well-aligned with the notion of environmental citizenship (EC), which aims at transforming citizens into “environmental agents of change”, seeking to achieve more sustainable lifestyles. Despite these arguments, studies in this area are thinly spread and fragmented across various research domains. This article is grounded on a systematic review of empirical articles on BCGs for EC covering a time span of fifteen years and published in peer-reviewed journals and conference proceedings, in order to provide an understanding of the scope of empirical research in the field. In total, 44 articles were reviewed to shed light on their methodological underpinnings, the gaming elements and the persuasive strategies of the deployed BCGs, the EC actions facilitated by the BCGs, and the impact of BCGs on players’ EC competences. Our findings indicate that while BCGs seem to promote pro-environmental knowledge and attitudes, such an assertion is not fully warranted for pro-environmental behaviors. We reflect on our findings and provide future research directions to push forward the field of BCGs for EC.
Citations: 0
Design and Research of a Sound-to-RGB Smart Acoustic Device
IF 2.5 | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-08-13 | DOI: 10.3390/mti7080079
Z. Zlatev, J. Ilieva, D. Orozova, G. Shivacheva, Nadezhda Angelova
This paper presents a device that converts sound wave frequencies into colors to assist people with hearing problems in solving accessibility and communication problems in the hearing-impaired community. The device uses a precise mathematical apparatus and carefully selected hardware to achieve accurate conversion of sound to color, supported by specialized automatic processing software suitable for standardization. Experimental evaluation shows excellent performance for frequencies below 1000 Hz, although limitations are encountered at higher frequencies, requiring further investigation into advanced noise filtering and hardware optimization. The device shows promise for various applications, including education, art, and therapy. The study acknowledges its limitations and suggests future research to generalize the models for converting sound frequencies to color and improving usability for a broader range of hearing impairments. Feedback from the hearing-impaired community will play a critical role in further developing the device for practical use. Overall, this innovative device for converting sound to color represents a significant step toward improving accessibility and communication for people with hearing challenges. Continued research offers the potential to overcome challenges and extend the benefits of the device to a variety of areas, ultimately improving the quality of life for people with hearing impairments.
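The abstract does not disclose the device's actual frequency-to-colour mapping, so the following is only a minimal illustrative sketch of the general idea: estimate the dominant frequency of an audio frame and place it on the hue circle, clamped to the sub-1000 Hz band where the authors report reliable performance. The logarithmic hue rule and all parameter values below are assumptions, not the paper's "precise mathematical apparatus".

```python
# Illustrative sketch only: the paper's actual frequency-to-colour mapping is not
# given in the abstract, so a simple logarithmic frequency-to-hue rule is assumed.
import colorsys
import numpy as np

def dominant_frequency(samples: np.ndarray, sample_rate: int) -> float:
    """Return the strongest frequency component of a mono audio frame."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def frequency_to_rgb(freq_hz: float, f_min: float = 20.0, f_max: float = 1000.0) -> tuple:
    """Map a frequency to an RGB triple by placing it on the hue circle.

    Frequencies are clamped to [f_min, f_max]; the 1000 Hz ceiling mirrors the
    range where the paper reports reliable performance.
    """
    freq_hz = min(max(freq_hz, f_min), f_max)
    # Logarithmic position within the band, scaled to hue in [0, 0.8] (red -> violet).
    position = np.log(freq_hz / f_min) / np.log(f_max / f_min)
    r, g, b = colorsys.hsv_to_rgb(0.8 * position, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

if __name__ == "__main__":
    sample_rate = 8000
    t = np.arange(0, 0.1, 1.0 / sample_rate)
    frame = np.sin(2 * np.pi * 440.0 * t)          # 440 Hz test tone
    print(frequency_to_rgb(dominant_frequency(frame, sample_rate)))
```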
Citations: 0
Multimodal Interaction for Cobot Using MQTT
IF 2.5 | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-08-03 | DOI: 10.3390/mti7080078
J. Rouillard, Jean-Marc Vannobel
For greater efficiency, human–machine and human–robot interactions must be designed with the idea of multimodality in mind. To allow the use of several interaction modalities, such as voice, touch, and gaze tracking, on several different devices (computers, smartphones, tablets, etc.) and to integrate possible connected objects, it is necessary to have an effective and secure means of communication between the different parts of the system. This is even more important when a collaborative robot (cobot) shares the same space and works very close to the human during their tasks. This study presents research work in the field of multimodal interaction for a cobot using the MQTT protocol, in virtual (Webots) and real worlds (ESP microcontrollers, Arduino, IOT2040). We show how MQTT can be used efficiently, with a common publish/subscribe mechanism for several entities of the system, in order to interact with connected objects (like LEDs and conveyor belts), robotic arms (like the Ned Niryo), or mobile robots. We compare the use of MQTT with that of the Firebase Realtime Database used in several of our previous research works. We show how a “pick–wait–choose–and place” task can be carried out jointly by a cobot and a human, and what this implies in terms of communication and ergonomic rules with regard to health and industrial concerns (people with disabilities, and teleoperation).
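As an illustration of the publish/subscribe pattern the abstract describes, the sketch below bridges a voice-input topic to an arm-command topic with the paho-mqtt client. The broker address, topic names, and message schema are hypothetical assumptions for illustration, not taken from the paper, and the paho-mqtt 1.x callback API is assumed.

```python
# Minimal publish/subscribe sketch in the spirit of the paper's architecture.
# Broker address and topic names are illustrative assumptions; paho-mqtt 1.x API assumed.
import json
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"            # hypothetical local broker (e.g. hosted on the IOT2040)
TOPIC_VOICE = "cobot/input/voice"  # messages published by a speech-recognition front end
TOPIC_ARM = "cobot/arm/command"    # messages consumed by the robotic arm controller

def on_connect(client, userdata, flags, rc):
    # Subscribe to every interaction modality the bridge should react to.
    client.subscribe(TOPIC_VOICE)

def on_message(client, userdata, msg):
    # Translate a recognised voice command into an arm command on another topic.
    command = json.loads(msg.payload.decode())
    if command.get("intent") == "pick":
        client.publish(TOPIC_ARM, json.dumps({"action": "pick", "slot": command.get("slot", 1)}))

client = mqtt.Client("multimodal-bridge")
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```

The same broker can serve the virtual (Webots) and physical setups alike, since every entity only needs to agree on topic names and payload format.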
Citations: 1
Enhancing Object Detection for VIPs Using YOLOv4_Resnet101 and Text-to-Speech Conversion Model
Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-08-02 | DOI: 10.3390/mti7080077
Tahani Jaser Alahmadi, Atta Ur Rahman, Hend Khalid Alkahtani, Hisham Kholidy
Vision impairment affects an individual’s quality of life, posing challenges for visually impaired people (VIPs) in various aspects such as object recognition and daily tasks. Previous research has focused on developing visual navigation systems to assist VIPs, but there is a need for further improvements in accuracy, speed, and inclusion of a wider range of object categories that may obstruct VIPs’ daily lives. This study presents a modified version of YOLOv4 with a ResNet-101 backbone (YOLOv4_Resnet101), trained on multiple object classes, to assist VIPs in navigating their surroundings. Compared to the Darknet backbone used in standard YOLOv4, the ResNet-101 backbone in YOLOv4_Resnet101 offers a deeper and more powerful feature extraction network. The ResNet-101’s greater capacity enables better representation of complex visual patterns, which increases the accuracy of object detection. The proposed model is validated using the Microsoft Common Objects in Context (MS COCO) dataset. Image pre-processing techniques are employed to enhance the training process, and manual annotation ensures accurate labeling of all images. The module incorporates text-to-speech conversion, providing VIPs with auditory information to assist in obstacle recognition. The model achieves an accuracy of 96.34% on the test images obtained from the dataset after 4000 iterations of training, with a loss error rate of 0.073%.
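To illustrate how detection output can be turned into the kind of auditory feedback the abstract describes, the sketch below couples a placeholder detector with offline text-to-speech via pyttsx3. The run_detector function and its output format are hypothetical stand-ins, not the authors' YOLOv4_Resnet101 model, and the announcement logic is an assumption rather than the paper's implementation.

```python
# Generic detection-to-speech loop; a sketch, not the authors' pipeline.
# `run_detector` is a hypothetical placeholder for a YOLOv4 + ResNet-101 model.
import time
import pyttsx3

def run_detector(frame):
    """Placeholder: return a list of (label, confidence) pairs for one camera frame."""
    return [("person", 0.97), ("bicycle", 0.88)]

def announce(detections, engine, min_confidence=0.5):
    """Speak each confidently detected object so a visually impaired user can react."""
    spoken = [label for label, conf in detections if conf >= min_confidence]
    if spoken:
        engine.say("Ahead: " + ", ".join(spoken))
        engine.runAndWait()

if __name__ == "__main__":
    tts = pyttsx3.init()
    while True:
        frame = None                      # stand-in for a camera capture
        announce(run_detector(frame), tts)
        time.sleep(2.0)                   # throttle announcements to avoid overwhelming the user
```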
Citations: 0
Ability-Based Methods for Personalized Keyboard Generation
IF 2.5 | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-08-01 | Epub Date: 2022-08-03 | DOI: 10.3390/mti6080067
Claire L Mitchell, Gabriel J Cler, Susan K Fager, Paola Contessa, Serge H Roy, Gianluca De Luca, Joshua C Kline, Jennifer M Vojtech

This study introduces an ability-based method for personalized keyboard generation, wherein an individual's own movement and human-computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. The characterization is automatically employed to develop a computationally efficient keyboard layout that prioritizes each user's movement abilities through capturing directional constraints and preferences. We evaluated our approach in a study involving 16 participants using inertial sensing and facial electromyography as an access method, resulting in significantly increased communication rates using the personalized keyboard (52.0 bits/min) when compared to a generically optimized keyboard (47.9 bits/min). Our results demonstrate the ability to effectively characterize an individual's movement abilities to design a personalized keyboard for improved communication. This work underscores the importance of integrating a user's motor abilities when designing virtual interfaces.
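The following sketch illustrates the general ability-based idea only: per-direction movement costs (as might be estimated from a multidirectional point-select task) drive a greedy assignment of frequent characters to the easiest-to-reach key positions. The cost model, grid, and character ordering are all illustrative assumptions, not the authors' method.

```python
# Illustrative sketch of the general idea only: frequent characters are assigned to the
# key positions a given user reaches most easily. The cost model and character ordering
# below are assumptions, not the paper's method.
import math

# Hypothetical per-direction movement costs estimated from a point-select task
# (lower = easier for this user), keyed by movement angle in degrees.
direction_cost = {0: 1.0, 45: 1.4, 90: 1.1, 135: 1.8, 180: 1.2, 225: 1.9, 270: 1.3, 315: 1.5}

def key_cost(x: float, y: float) -> float:
    """Cost of reaching a key at offset (x, y) from the cursor's home position."""
    distance = math.hypot(x, y)
    angle = math.degrees(math.atan2(y, x)) % 360
    # Use the cost of the nearest characterised direction, weighted by travel distance.
    nearest = min(direction_cost, key=lambda a: min(abs(angle - a), 360 - abs(angle - a)))
    return distance * direction_cost[nearest]

# Candidate key positions on a 5-row by 6-column grid centred on the home position.
positions = [(col - 2.5, row - 2.0) for row in range(5) for col in range(6)]

# Rough English character-frequency ordering (30 symbols for 30 keys).
characters = list("etaoinshrdlcumwfgypbvkjxqz") + [" ", ".", ",", "?"]

# Greedy assignment: the cheapest positions receive the most frequent characters.
layout = dict(zip(sorted(positions, key=lambda p: key_cost(*p)), characters))
print(layout)
```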

Citations: 0
The Replica Project: Co-Designing a Discovery Engine for Digital Art History
IF 2.5 | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-01-01 | DOI: 10.3390/mti6110100
I. D. Lenardo
{"title":"The Replica Project: Co-Designing a Discovery Engine for Digital Art History","authors":"I. D. Lenardo","doi":"10.3390/mti6110100","DOIUrl":"https://doi.org/10.3390/mti6110100","url":null,"abstract":"","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"6 1","pages":"100"},"PeriodicalIF":2.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69756257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Acknowledgement to Reviewers of MTI in 2019
IF 2.5 | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2020-01-01 | DOI: 10.3390/mti4010002
Mti Editorial Office
{"title":"Acknowledgement to Reviewers of MTI in 2019","authors":"Mti Editorial Office","doi":"10.3390/mti4010002","DOIUrl":"https://doi.org/10.3390/mti4010002","url":null,"abstract":"","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":"4 1","pages":"2"},"PeriodicalIF":2.5,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3390/mti4010002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69756640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Socrative in Higher Education: Game vs. Other Uses
IF 2.5 | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2019-07-06 | DOI: 10.3390/MTI3030049
Fátima Faya Cerqueiro, Anastasia Harrison
The integration of clickers in Higher Education settings has proved to be particularly useful for enhancing motivation, engagement and performance; for developing cooperative or collaborative tasks; for checking understanding during the lesson; or even for assessment purposes. This paper explores and exemplifies three uses of Socrative, a mobile application specifically designed as a clicker for the classroom. Socrative was used during three sessions with the same group of first-year University students at a Faculty of Education. One of these sessions—a review lesson—was gamified, whereas the other two—a collaborative reading activity seminar, and a lecture—were not. Ad-hoc questionnaires were distributed after each of them. Results suggest that students welcome the use of clickers and that combining them with gamification strategies may increase students’ perceived satisfaction. The experiences described in this paper show how Socrative is an effective means of providing formative feedback and may actually save time during lessons.
Citations: 11