HirePreter: A Framework for Providing Fine-grained Interpretation for Automated Job Interview Analysis

Wasifur Rahman, Sazan Mahbub, Asif Salekin, M. Hasan, E. Hoque
DOI: 10.1109/aciiw52867.2021.9666201
Published in: 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2021-09-28
Citations: 1

Abstract

There has been a rise in automated technologies that screen potential job applicants through affective signals captured from video-based interviews. These tools can make the interview process scalable and objective, but they often provide little to no information about how the machine learning model makes crucial decisions that impact the livelihoods of thousands of people. We built an ensemble model, combining Multiple-Instance-Learning and Language-Modeling based models, that can predict whether an interviewee should be hired. Using both model-specific and model-agnostic interpretation techniques, we can identify the most informative time segments and the features driving the model's decision-making. Our analysis also shows that our models are significantly influenced by the beginning and ending portions of the video. Our model achieves 75.3% accuracy in predicting whether an interviewee should be hired on the ETS Job Interview dataset. Our approach can be extended to interpret other video-based affective computing tasks such as analyzing sentiment, measuring credibility, or coaching individuals to collaborate more effectively in a team.
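The abstract does not specify which model-agnostic interpretation technique is used, but a common approach for locating informative time segments is occlusion analysis: mask one segment at a time and measure how much the model's hire score drops. The sketch below is purely illustrative (the `toy_model`, its weights, and the segment layout are hypothetical, not taken from the paper); it shows how such an analysis would surface the kind of beginning/ending effect the authors report.

```python
import numpy as np

def occlusion_importance(predict, segments):
    """Score each time segment by how much occluding it changes the model's output.

    predict: callable mapping a (num_segments, num_features) array to a scalar score.
    segments: per-segment feature matrix for one interview video.
    """
    base = predict(segments)
    scores = []
    for i in range(len(segments)):
        masked = segments.copy()
        masked[i] = 0.0  # occlude segment i (zero out its features)
        scores.append(base - predict(masked))  # larger drop = more informative segment
    return np.array(scores)

# Hypothetical stand-in model: a hire score that weights the first and last
# segments of the video most heavily, mimicking the paper's reported finding.
def toy_model(segs):
    w = np.array([0.4, 0.05, 0.05, 0.05, 0.45])  # assumed per-segment weights
    return float(w @ segs.mean(axis=1))

segs = np.ones((5, 8))  # 5 time segments x 8 affective features (dummy values)
importance = occlusion_importance(toy_model, segs)
# For this toy model, the first and last segments receive the highest scores.
```

The same loop works with any black-box predictor, which is what makes the technique model-agnostic: only the ability to query `predict` is required, not access to gradients or internals.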