AI-enhanced interview simulation in the metaverse: Transforming professional skills training through VR and generative conversational AI

Abdullah Bin Nofal, Hassan Ali, Muhammad Hadi, Aizaz Ahmad, Adnan Qayyum, Aditya Johri, Ala Al-Fuqaha, Junaid Qadir
Computers and Education: Artificial Intelligence, Volume 8, Article 100347 (published 2024-12-30).
DOI: 10.1016/j.caeai.2024.100347
Citations: 0

Abstract

Interviewing skills play a pivotal role in job applications and the job search, yet professional development to prepare for interviewing is a neglected area of research. Professional training methods are available but are often prohibitively expensive, limiting opportunities primarily to privileged individuals. To bridge this accessibility gap and democratize access to job opportunities, there is a need to develop automated interview simulation platforms. The advent of Generative AI (GenAI) technology, in particular Large Language Models (LLMs), makes this a viable proposition, but progress is hindered by the absence of open-source implementations for reproducibility and comparison, as well as the lack of suitable evaluation benchmarks and experimental setups. In particular, we do not yet know how robust such systems are and whether they will be bias-free, factors that will contribute to their acceptability and use. To this end, we propose the Interview Training and Education Module (ITEM), a job interview training module that combines Virtual Reality-based metaverse technology with LLM-based GenAI models. Our module creates realistic interview experiences for skill enhancement, complete with personalized feedback and improvement guidelines based on user responses. In this paper, we present an experimental evaluation of the module to ascertain its robustness, including a bias analysis. First, we establish an experimental setup to gauge platform robustness by examining question similarity across varied prompts using Bidirectional and Auto-Regressive Transformers (BART) and topic modeling. Subsequently, we explore biases in three categories—country of origin, religion, and gender—by analyzing ITEM's evaluation scores while manipulating candidate backgrounds, all while keeping their responses unchanged. Our findings indicate potential biases replicated by ITEM, highlighting the need for caution in its application for personal development and training.
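The robustness setup described above can be sketched in miniature: generate interview questions from several paraphrased prompts and measure how similar the resulting questions are. This is an illustrative stand-in only — the paper uses BART-based representations and topic modeling, whereas this sketch uses simple bag-of-words cosine similarity so it runs with the standard library alone; the sample questions are hypothetical.

```python
from collections import Counter
from itertools import combinations
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def mean_pairwise_similarity(questions: list[str]) -> float:
    """Average similarity over all question pairs; higher means the
    platform asks more consistent questions across prompt variations."""
    pairs = list(combinations(questions, 2))
    return sum(cosine_similarity(a, b) for a, b in pairs) / len(pairs)

# Hypothetical questions generated from three paraphrased prompts
# for the same job role.
questions = [
    "Describe a time you resolved a conflict within your team.",
    "Tell me about a situation where you handled a team conflict.",
    "Describe how you dealt with a disagreement among colleagues.",
]
print(f"mean pairwise similarity: {mean_pairwise_similarity(questions):.2f}")
```

A high mean similarity across prompt variants suggests the question generator is stable; in practice one would replace the bag-of-words vectors with BART embeddings and add topic-level comparison, as the paper describes.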
This pioneering initiative introduces the first open-source module for job interview training within a virtual metaverse, leveraging LLM-based Generative AI, designed for extension and testing by the scientific community, thereby enhancing insights into the limitations and ethical considerations of AI-driven interview simulation platforms.
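The bias analysis can likewise be sketched: hold the candidate's answers fixed, vary only the stated background attribute (country of origin, religion, gender), and compare the evaluation scores per group. The scores and attribute groups below are hypothetical illustrations, not results from the paper, and the 0.5-point tolerance is an arbitrary assumption for the sketch.

```python
from statistics import mean

# Hypothetical evaluation scores returned by an interview module for the
# SAME fixed answers, with only the candidate's stated background varied.
scores_by_group = {
    "country": {"A": [7.8, 8.1, 7.9], "B": [7.2, 7.0, 7.4]},
    "gender":  {"female": [7.6, 7.8, 7.7], "male": [7.5, 7.9, 7.6]},
}

def score_gap(groups: dict[str, list[float]]) -> float:
    """Largest difference between per-group mean scores for one attribute.
    Since the answers are identical, any sizable gap points to bias."""
    means = [mean(v) for v in groups.values()]
    return max(means) - min(means)

for attribute, groups in scores_by_group.items():
    gap = score_gap(groups)
    flag = "possible bias" if gap > 0.5 else "within tolerance"
    print(f"{attribute}: gap={gap:.2f} ({flag})")
```

Because responses are held constant, the score gap isolates the effect of the background attribute alone — the same counterfactual logic the paper applies to ITEM's evaluation scores.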
Source journal: Computers and Education: Artificial Intelligence
CiteScore: 16.80
Self-citation rate: 0.00%
Articles published: 66
Review time: 50 days
Latest articles from this journal:
- Conversational AI in children's home literacy learning: effectiveness, advantages, challenges, and family perception
- Enhancing AI literacy for educators: Where to start and to what end?
- Artificial intelligence literacy at school: A systematic review with a focus on psychological foundations
- Large language models for education: An open-source paradigm for automated Q&A in the graduate classroom
- Generative AI in higher education: A bibliometric review of emerging trends, power dynamics, and global research landscapes